Jan 05 20:01:01 localhost kernel: Linux version 5.14.0-654.el9.x86_64 (mockbuild@x86-05.stream.rdu2.redhat.com) (gcc (GCC) 11.5.0 20240719 (Red Hat 11.5.0-14), GNU ld version 2.35.2-69.el9) #1 SMP PREEMPT_DYNAMIC Fri Dec 19 08:34:59 UTC 2025
Jan 05 20:01:01 localhost kernel: The list of certified hardware and cloud instances for Red Hat Enterprise Linux 9 can be viewed at the Red Hat Ecosystem Catalog, https://catalog.redhat.com.
Jan 05 20:01:01 localhost kernel: Command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 20:01:01 localhost kernel: BIOS-provided physical RAM map:
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x0000000000000000-0x000000000009fbff] usable
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x000000000009fc00-0x000000000009ffff] reserved
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x00000000000f0000-0x00000000000fffff] reserved
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x0000000000100000-0x00000000bffdafff] usable
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x00000000bffdb000-0x00000000bfffffff] reserved
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x00000000feffc000-0x00000000feffffff] reserved
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x00000000fffc0000-0x00000000ffffffff] reserved
Jan 05 20:01:01 localhost kernel: BIOS-e820: [mem 0x0000000100000000-0x000000023fffffff] usable
Jan 05 20:01:01 localhost kernel: NX (Execute Disable) protection: active
Jan 05 20:01:01 localhost kernel: APIC: Static calls initialized
Jan 05 20:01:01 localhost kernel: SMBIOS 2.8 present.
Jan 05 20:01:01 localhost kernel: DMI: OpenStack Foundation OpenStack Nova, BIOS 1.15.0-1 04/01/2014
Jan 05 20:01:01 localhost kernel: Hypervisor detected: KVM
Jan 05 20:01:01 localhost kernel: kvm-clock: Using msrs 4b564d01 and 4b564d00
Jan 05 20:01:01 localhost kernel: kvm-clock: using sched offset of 3436675882 cycles
Jan 05 20:01:01 localhost kernel: clocksource: kvm-clock: mask: 0xffffffffffffffff max_cycles: 0x1cd42e4dffb, max_idle_ns: 881590591483 ns
Jan 05 20:01:01 localhost kernel: tsc: Detected 2800.000 MHz processor
Jan 05 20:01:01 localhost kernel: e820: update [mem 0x00000000-0x00000fff] usable ==> reserved
Jan 05 20:01:01 localhost kernel: e820: remove [mem 0x000a0000-0x000fffff] usable
Jan 05 20:01:01 localhost kernel: last_pfn = 0x240000 max_arch_pfn = 0x400000000
Jan 05 20:01:01 localhost kernel: MTRR map: 4 entries (3 fixed + 1 variable; max 19), built from 8 variable MTRRs
Jan 05 20:01:01 localhost kernel: x86/PAT: Configuration [0-7]: WB  WC  UC- UC  WB  WP  UC- WT  
Jan 05 20:01:01 localhost kernel: last_pfn = 0xbffdb max_arch_pfn = 0x400000000
Jan 05 20:01:01 localhost kernel: found SMP MP-table at [mem 0x000f5ae0-0x000f5aef]
Jan 05 20:01:01 localhost kernel: Using GB pages for direct mapping
Jan 05 20:01:01 localhost kernel: RAMDISK: [mem 0x2d462000-0x32a28fff]
Jan 05 20:01:01 localhost kernel: ACPI: Early table checksum verification disabled
Jan 05 20:01:01 localhost kernel: ACPI: RSDP 0x00000000000F5AA0 000014 (v00 BOCHS )
Jan 05 20:01:01 localhost kernel: ACPI: RSDT 0x00000000BFFE16BD 000030 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 20:01:01 localhost kernel: ACPI: FACP 0x00000000BFFE1571 000074 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 20:01:01 localhost kernel: ACPI: DSDT 0x00000000BFFDFC80 0018F1 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 20:01:01 localhost kernel: ACPI: FACS 0x00000000BFFDFC40 000040
Jan 05 20:01:01 localhost kernel: ACPI: APIC 0x00000000BFFE15E5 0000B0 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 20:01:01 localhost kernel: ACPI: WAET 0x00000000BFFE1695 000028 (v01 BOCHS  BXPC     00000001 BXPC 00000001)
Jan 05 20:01:01 localhost kernel: ACPI: Reserving FACP table memory at [mem 0xbffe1571-0xbffe15e4]
Jan 05 20:01:01 localhost kernel: ACPI: Reserving DSDT table memory at [mem 0xbffdfc80-0xbffe1570]
Jan 05 20:01:01 localhost kernel: ACPI: Reserving FACS table memory at [mem 0xbffdfc40-0xbffdfc7f]
Jan 05 20:01:01 localhost kernel: ACPI: Reserving APIC table memory at [mem 0xbffe15e5-0xbffe1694]
Jan 05 20:01:01 localhost kernel: ACPI: Reserving WAET table memory at [mem 0xbffe1695-0xbffe16bc]
Jan 05 20:01:01 localhost kernel: No NUMA configuration found
Jan 05 20:01:01 localhost kernel: Faking a node at [mem 0x0000000000000000-0x000000023fffffff]
Jan 05 20:01:01 localhost kernel: NODE_DATA(0) allocated [mem 0x23ffd3000-0x23fffdfff]
Jan 05 20:01:01 localhost kernel: crashkernel reserved: 0x00000000af000000 - 0x00000000bf000000 (256 MB)
Jan 05 20:01:01 localhost kernel: Zone ranges:
Jan 05 20:01:01 localhost kernel:   DMA      [mem 0x0000000000001000-0x0000000000ffffff]
Jan 05 20:01:01 localhost kernel:   DMA32    [mem 0x0000000001000000-0x00000000ffffffff]
Jan 05 20:01:01 localhost kernel:   Normal   [mem 0x0000000100000000-0x000000023fffffff]
Jan 05 20:01:01 localhost kernel:   Device   empty
Jan 05 20:01:01 localhost kernel: Movable zone start for each node
Jan 05 20:01:01 localhost kernel: Early memory node ranges
Jan 05 20:01:01 localhost kernel:   node   0: [mem 0x0000000000001000-0x000000000009efff]
Jan 05 20:01:01 localhost kernel:   node   0: [mem 0x0000000000100000-0x00000000bffdafff]
Jan 05 20:01:01 localhost kernel:   node   0: [mem 0x0000000100000000-0x000000023fffffff]
Jan 05 20:01:01 localhost kernel: Initmem setup node 0 [mem 0x0000000000001000-0x000000023fffffff]
Jan 05 20:01:01 localhost kernel: On node 0, zone DMA: 1 pages in unavailable ranges
Jan 05 20:01:01 localhost kernel: On node 0, zone DMA: 97 pages in unavailable ranges
Jan 05 20:01:01 localhost kernel: On node 0, zone Normal: 37 pages in unavailable ranges
Jan 05 20:01:01 localhost kernel: ACPI: PM-Timer IO Port: 0x608
Jan 05 20:01:01 localhost kernel: ACPI: LAPIC_NMI (acpi_id[0xff] dfl dfl lint[0x1])
Jan 05 20:01:01 localhost kernel: IOAPIC[0]: apic_id 0, version 17, address 0xfec00000, GSI 0-23
Jan 05 20:01:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
Jan 05 20:01:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 5 global_irq 5 high level)
Jan 05 20:01:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
Jan 05 20:01:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 10 global_irq 10 high level)
Jan 05 20:01:01 localhost kernel: ACPI: INT_SRC_OVR (bus 0 bus_irq 11 global_irq 11 high level)
Jan 05 20:01:01 localhost kernel: ACPI: Using ACPI (MADT) for SMP configuration information
Jan 05 20:01:01 localhost kernel: TSC deadline timer available
Jan 05 20:01:01 localhost kernel: CPU topo: Max. logical packages:   8
Jan 05 20:01:01 localhost kernel: CPU topo: Max. logical dies:       8
Jan 05 20:01:01 localhost kernel: CPU topo: Max. dies per package:   1
Jan 05 20:01:01 localhost kernel: CPU topo: Max. threads per core:   1
Jan 05 20:01:01 localhost kernel: CPU topo: Num. cores per package:     1
Jan 05 20:01:01 localhost kernel: CPU topo: Num. threads per package:   1
Jan 05 20:01:01 localhost kernel: CPU topo: Allowing 8 present CPUs plus 0 hotplug CPUs
Jan 05 20:01:01 localhost kernel: kvm-guest: APIC: eoi() replaced with kvm_guest_apic_eoi_write()
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x00000000-0x00000fff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x0009f000-0x0009ffff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000a0000-0x000effff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0x000f0000-0x000fffff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xbffdb000-0xbfffffff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xc0000000-0xfeffbfff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfeffc000-0xfeffffff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xff000000-0xfffbffff]
Jan 05 20:01:01 localhost kernel: PM: hibernation: Registered nosave memory: [mem 0xfffc0000-0xffffffff]
Jan 05 20:01:01 localhost kernel: [mem 0xc0000000-0xfeffbfff] available for PCI devices
Jan 05 20:01:01 localhost kernel: Booting paravirtualized kernel on KVM
Jan 05 20:01:01 localhost kernel: clocksource: refined-jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1910969940391419 ns
Jan 05 20:01:01 localhost kernel: setup_percpu: NR_CPUS:8192 nr_cpumask_bits:8 nr_cpu_ids:8 nr_node_ids:1
Jan 05 20:01:01 localhost kernel: percpu: Embedded 64 pages/cpu s225280 r8192 d28672 u262144
Jan 05 20:01:01 localhost kernel: pcpu-alloc: s225280 r8192 d28672 u262144 alloc=1*2097152
Jan 05 20:01:01 localhost kernel: pcpu-alloc: [0] 0 1 2 3 4 5 6 7 
Jan 05 20:01:01 localhost kernel: kvm-guest: PV spinlocks disabled, no host support
Jan 05 20:01:01 localhost kernel: Kernel command line: BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 20:01:01 localhost kernel: Unknown kernel command line parameters "BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64", will be passed to user space.
Jan 05 20:01:01 localhost kernel: random: crng init done
Jan 05 20:01:01 localhost kernel: Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 05 20:01:01 localhost kernel: Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 05 20:01:01 localhost kernel: Fallback order for Node 0: 0 
Jan 05 20:01:01 localhost kernel: Built 1 zonelists, mobility grouping on.  Total pages: 2064091
Jan 05 20:01:01 localhost kernel: Policy zone: Normal
Jan 05 20:01:01 localhost kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 05 20:01:01 localhost kernel: software IO TLB: area num 8.
Jan 05 20:01:01 localhost kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=8, Nodes=1
Jan 05 20:01:01 localhost kernel: ftrace: allocating 49413 entries in 194 pages
Jan 05 20:01:01 localhost kernel: ftrace: allocated 194 pages with 3 groups
Jan 05 20:01:01 localhost kernel: Dynamic Preempt: voluntary
Jan 05 20:01:01 localhost kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 05 20:01:01 localhost kernel: rcu:         RCU event tracing is enabled.
Jan 05 20:01:01 localhost kernel: rcu:         RCU restricting CPUs from NR_CPUS=8192 to nr_cpu_ids=8.
Jan 05 20:01:01 localhost kernel:         Trampoline variant of Tasks RCU enabled.
Jan 05 20:01:01 localhost kernel:         Rude variant of Tasks RCU enabled.
Jan 05 20:01:01 localhost kernel:         Tracing variant of Tasks RCU enabled.
Jan 05 20:01:01 localhost kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 05 20:01:01 localhost kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=8
Jan 05 20:01:01 localhost kernel: RCU Tasks: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 20:01:01 localhost kernel: RCU Tasks Rude: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 20:01:01 localhost kernel: RCU Tasks Trace: Setting shift to 3 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=8.
Jan 05 20:01:01 localhost kernel: NR_IRQS: 524544, nr_irqs: 488, preallocated irqs: 16
Jan 05 20:01:01 localhost kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 05 20:01:01 localhost kernel: kfence: initialized - using 2097152 bytes for 255 objects at 0x(____ptrval____)-0x(____ptrval____)
Jan 05 20:01:01 localhost kernel: Console: colour VGA+ 80x25
Jan 05 20:01:01 localhost kernel: printk: console [ttyS0] enabled
Jan 05 20:01:01 localhost kernel: ACPI: Core revision 20230331
Jan 05 20:01:01 localhost kernel: APIC: Switch to symmetric I/O mode setup
Jan 05 20:01:01 localhost kernel: x2apic enabled
Jan 05 20:01:01 localhost kernel: APIC: Switched APIC routing to: physical x2apic
Jan 05 20:01:01 localhost kernel: tsc: Marking TSC unstable due to TSCs unsynchronized
Jan 05 20:01:01 localhost kernel: Calibrating delay loop (skipped) preset value.. 5600.00 BogoMIPS (lpj=2800000)
Jan 05 20:01:01 localhost kernel: x86/cpu: User Mode Instruction Prevention (UMIP) activated
Jan 05 20:01:01 localhost kernel: Last level iTLB entries: 4KB 512, 2MB 255, 4MB 127
Jan 05 20:01:01 localhost kernel: Last level dTLB entries: 4KB 512, 2MB 255, 4MB 127, 1GB 0
Jan 05 20:01:01 localhost kernel: Spectre V1 : Mitigation: usercopy/swapgs barriers and __user pointer sanitization
Jan 05 20:01:01 localhost kernel: Spectre V2 : Mitigation: Retpolines
Jan 05 20:01:01 localhost kernel: Spectre V2 : Spectre v2 / SpectreRSB: Filling RSB on context switch and VMEXIT
Jan 05 20:01:01 localhost kernel: Spectre V2 : Enabling Speculation Barrier for firmware calls
Jan 05 20:01:01 localhost kernel: RETBleed: Mitigation: untrained return thunk
Jan 05 20:01:01 localhost kernel: Spectre V2 : mitigation: Enabling conditional Indirect Branch Prediction Barrier
Jan 05 20:01:01 localhost kernel: Speculative Store Bypass: Mitigation: Speculative Store Bypass disabled via prctl
Jan 05 20:01:01 localhost kernel: Speculative Return Stack Overflow: IBPB-extending microcode not applied!
Jan 05 20:01:01 localhost kernel: Speculative Return Stack Overflow: WARNING: See https://kernel.org/doc/html/latest/admin-guide/hw-vuln/srso.html for mitigation options.
Jan 05 20:01:01 localhost kernel: x86/bugs: return thunk changed
Jan 05 20:01:01 localhost kernel: Speculative Return Stack Overflow: Vulnerable: Safe RET, no microcode
Jan 05 20:01:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x001: 'x87 floating point registers'
Jan 05 20:01:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x002: 'SSE registers'
Jan 05 20:01:01 localhost kernel: x86/fpu: Supporting XSAVE feature 0x004: 'AVX registers'
Jan 05 20:01:01 localhost kernel: x86/fpu: xstate_offset[2]:  576, xstate_sizes[2]:  256
Jan 05 20:01:01 localhost kernel: x86/fpu: Enabled xstate features 0x7, context size is 832 bytes, using 'compacted' format.
Jan 05 20:01:01 localhost kernel: Freeing SMP alternatives memory: 40K
Jan 05 20:01:01 localhost kernel: pid_max: default: 32768 minimum: 301
Jan 05 20:01:01 localhost kernel: LSM: initializing lsm=lockdown,capability,landlock,yama,integrity,selinux,bpf
Jan 05 20:01:01 localhost kernel: landlock: Up and running.
Jan 05 20:01:01 localhost kernel: Yama: becoming mindful.
Jan 05 20:01:01 localhost kernel: SELinux:  Initializing.
Jan 05 20:01:01 localhost kernel: LSM support for eBPF active
Jan 05 20:01:01 localhost kernel: Mount-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 05 20:01:01 localhost kernel: Mountpoint-cache hash table entries: 16384 (order: 5, 131072 bytes, linear)
Jan 05 20:01:01 localhost kernel: smpboot: CPU0: AMD EPYC-Rome Processor (family: 0x17, model: 0x31, stepping: 0x0)
Jan 05 20:01:01 localhost kernel: Performance Events: Fam17h+ core perfctr, AMD PMU driver.
Jan 05 20:01:01 localhost kernel: ... version:                0
Jan 05 20:01:01 localhost kernel: ... bit width:              48
Jan 05 20:01:01 localhost kernel: ... generic registers:      6
Jan 05 20:01:01 localhost kernel: ... value mask:             0000ffffffffffff
Jan 05 20:01:01 localhost kernel: ... max period:             00007fffffffffff
Jan 05 20:01:01 localhost kernel: ... fixed-purpose events:   0
Jan 05 20:01:01 localhost kernel: ... event mask:             000000000000003f
Jan 05 20:01:01 localhost kernel: signal: max sigframe size: 1776
Jan 05 20:01:01 localhost kernel: rcu: Hierarchical SRCU implementation.
Jan 05 20:01:01 localhost kernel: rcu:         Max phase no-delay instances is 400.
Jan 05 20:01:01 localhost kernel: smp: Bringing up secondary CPUs ...
Jan 05 20:01:01 localhost kernel: smpboot: x86: Booting SMP configuration:
Jan 05 20:01:01 localhost kernel: .... node  #0, CPUs:      #1 #2 #3 #4 #5 #6 #7
Jan 05 20:01:01 localhost kernel: smp: Brought up 1 node, 8 CPUs
Jan 05 20:01:01 localhost kernel: smpboot: Total of 8 processors activated (44800.00 BogoMIPS)
Jan 05 20:01:01 localhost kernel: node 0 deferred pages initialised in 12ms
Jan 05 20:01:01 localhost kernel: Memory: 7763896K/8388068K available (16384K kernel code, 5796K rwdata, 13908K rodata, 4196K init, 7200K bss, 618244K reserved, 0K cma-reserved)
Jan 05 20:01:01 localhost kernel: devtmpfs: initialized
Jan 05 20:01:01 localhost kernel: x86/mm: Memory block size: 128MB
Jan 05 20:01:01 localhost kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 05 20:01:01 localhost kernel: futex hash table entries: 2048 (131072 bytes on 1 NUMA nodes, total 128 KiB, linear).
Jan 05 20:01:01 localhost kernel: pinctrl core: initialized pinctrl subsystem
Jan 05 20:01:01 localhost kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 05 20:01:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL pool for atomic allocations
Jan 05 20:01:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 05 20:01:01 localhost kernel: DMA: preallocated 1024 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 05 20:01:01 localhost kernel: audit: initializing netlink subsys (disabled)
Jan 05 20:01:01 localhost kernel: audit: type=2000 audit(1767643259.632:1): state=initialized audit_enabled=0 res=1
Jan 05 20:01:01 localhost kernel: thermal_sys: Registered thermal governor 'fair_share'
Jan 05 20:01:01 localhost kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 05 20:01:01 localhost kernel: thermal_sys: Registered thermal governor 'user_space'
Jan 05 20:01:01 localhost kernel: cpuidle: using governor menu
Jan 05 20:01:01 localhost kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 05 20:01:01 localhost kernel: PCI: Using configuration type 1 for base access
Jan 05 20:01:01 localhost kernel: PCI: Using configuration type 1 for extended access
Jan 05 20:01:01 localhost kernel: kprobes: kprobe jump-optimization is enabled. All kprobes are optimized if possible.
Jan 05 20:01:01 localhost kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 05 20:01:01 localhost kernel: HugeTLB: 16380 KiB vmemmap can be freed for a 1.00 GiB page
Jan 05 20:01:01 localhost kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 05 20:01:01 localhost kernel: HugeTLB: 28 KiB vmemmap can be freed for a 2.00 MiB page
Jan 05 20:01:01 localhost kernel: Demotion targets for Node 0: null
Jan 05 20:01:01 localhost kernel: cryptd: max_cpu_qlen set to 1000
Jan 05 20:01:01 localhost kernel: ACPI: Added _OSI(Module Device)
Jan 05 20:01:01 localhost kernel: ACPI: Added _OSI(Processor Device)
Jan 05 20:01:01 localhost kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 05 20:01:01 localhost kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 05 20:01:01 localhost kernel: ACPI: Interpreter enabled
Jan 05 20:01:01 localhost kernel: ACPI: PM: (supports S0 S3 S4 S5)
Jan 05 20:01:01 localhost kernel: ACPI: Using IOAPIC for interrupt routing
Jan 05 20:01:01 localhost kernel: PCI: Using host bridge windows from ACPI; if necessary, use "pci=nocrs" and report a bug
Jan 05 20:01:01 localhost kernel: PCI: Using E820 reservations for host bridge windows
Jan 05 20:01:01 localhost kernel: ACPI: Enabled 2 GPEs in block 00 to 0F
Jan 05 20:01:01 localhost kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 05 20:01:01 localhost kernel: acpi PNP0A03:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI EDR HPX-Type3]
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [3] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [4] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [5] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [6] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [7] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [8] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [9] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [10] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [11] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [12] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [13] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [14] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [15] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [16] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [17] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [18] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [19] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [20] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [21] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [22] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [23] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [24] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [25] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [26] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [27] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [28] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [29] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [30] registered
Jan 05 20:01:01 localhost kernel: acpiphp: Slot [31] registered
Jan 05 20:01:01 localhost kernel: PCI host bridge to bus 0000:00
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0000-0x0cf7 window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [io  0x0d00-0xffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x000a0000-0x000bffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0xc0000000-0xfebfffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [mem 0x240000000-0x2bfffffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:00.0: [8086:1237] type 00 class 0x060000 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.0: [8086:7000] type 00 class 0x060100 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: [8086:7010] type 00 class 0x010180 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: BAR 4 [io  0xc140-0xc14f]
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: BAR 0 [io  0x01f0-0x01f7]: legacy IDE quirk
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: BAR 1 [io  0x03f6]: legacy IDE quirk
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: BAR 2 [io  0x0170-0x0177]: legacy IDE quirk
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.1: BAR 3 [io  0x0376]: legacy IDE quirk
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.2: [8086:7020] type 00 class 0x0c0300 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.2: BAR 4 [io  0xc100-0xc11f]
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.3: [8086:7113] type 00 class 0x068000 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0600-0x063f] claimed by PIIX4 ACPI
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.3: quirk: [io  0x0700-0x070f] claimed by PIIX4 SMB
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: [1af4:1050] type 00 class 0x030000 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: BAR 0 [mem 0xfe000000-0xfe7fffff pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: BAR 2 [mem 0xfe800000-0xfe803fff 64bit pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: BAR 4 [mem 0xfeb90000-0xfeb90fff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: ROM [mem 0xfeb80000-0xfeb8ffff pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: Video device with shadowed ROM at [mem 0x000c0000-0x000dffff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:03.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:03.0: BAR 0 [io  0xc080-0xc0bf]
Jan 05 20:01:01 localhost kernel: pci 0000:00:03.0: BAR 1 [mem 0xfeb91000-0xfeb91fff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:03.0: BAR 4 [mem 0xfe804000-0xfe807fff 64bit pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:03.0: ROM [mem 0xfeb00000-0xfeb7ffff pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:04.0: [1af4:1001] type 00 class 0x010000 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:04.0: BAR 0 [io  0xc000-0xc07f]
Jan 05 20:01:01 localhost kernel: pci 0000:00:04.0: BAR 1 [mem 0xfeb92000-0xfeb92fff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:04.0: BAR 4 [mem 0xfe808000-0xfe80bfff 64bit pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:05.0: [1af4:1002] type 00 class 0x00ff00 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:05.0: BAR 0 [io  0xc0c0-0xc0ff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:05.0: BAR 4 [mem 0xfe80c000-0xfe80ffff 64bit pref]
Jan 05 20:01:01 localhost kernel: pci 0000:00:06.0: [1af4:1005] type 00 class 0x00ff00 conventional PCI endpoint
Jan 05 20:01:01 localhost kernel: pci 0000:00:06.0: BAR 0 [io  0xc120-0xc13f]
Jan 05 20:01:01 localhost kernel: pci 0000:00:06.0: BAR 4 [mem 0xfe810000-0xfe813fff 64bit pref]
Jan 05 20:01:01 localhost kernel: ACPI: PCI: Interrupt link LNKA configured for IRQ 10
Jan 05 20:01:01 localhost kernel: ACPI: PCI: Interrupt link LNKB configured for IRQ 10
Jan 05 20:01:01 localhost kernel: ACPI: PCI: Interrupt link LNKC configured for IRQ 11
Jan 05 20:01:01 localhost kernel: ACPI: PCI: Interrupt link LNKD configured for IRQ 11
Jan 05 20:01:01 localhost kernel: ACPI: PCI: Interrupt link LNKS configured for IRQ 9
Jan 05 20:01:01 localhost kernel: iommu: Default domain type: Translated
Jan 05 20:01:01 localhost kernel: iommu: DMA domain TLB invalidation policy: lazy mode
Jan 05 20:01:01 localhost kernel: SCSI subsystem initialized
Jan 05 20:01:01 localhost kernel: ACPI: bus type USB registered
Jan 05 20:01:01 localhost kernel: usbcore: registered new interface driver usbfs
Jan 05 20:01:01 localhost kernel: usbcore: registered new interface driver hub
Jan 05 20:01:01 localhost kernel: usbcore: registered new device driver usb
Jan 05 20:01:01 localhost kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 05 20:01:01 localhost kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti <giometti@linux.it>
Jan 05 20:01:01 localhost kernel: PTP clock support registered
Jan 05 20:01:01 localhost kernel: EDAC MC: Ver: 3.0.0
Jan 05 20:01:01 localhost kernel: NetLabel: Initializing
Jan 05 20:01:01 localhost kernel: NetLabel:  domain hash size = 128
Jan 05 20:01:01 localhost kernel: NetLabel:  protocols = UNLABELED CIPSOv4 CALIPSO
Jan 05 20:01:01 localhost kernel: NetLabel:  unlabeled traffic allowed by default
Jan 05 20:01:01 localhost kernel: PCI: Using ACPI for IRQ routing
Jan 05 20:01:01 localhost kernel: PCI: pci_cache_line_size set to 64 bytes
Jan 05 20:01:01 localhost kernel: e820: reserve RAM buffer [mem 0x0009fc00-0x0009ffff]
Jan 05 20:01:01 localhost kernel: e820: reserve RAM buffer [mem 0xbffdb000-0xbfffffff]
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: vgaarb: setting as boot VGA device
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: vgaarb: bridge control possible
Jan 05 20:01:01 localhost kernel: pci 0000:00:02.0: vgaarb: VGA device added: decodes=io+mem,owns=io+mem,locks=none
Jan 05 20:01:01 localhost kernel: vgaarb: loaded
Jan 05 20:01:01 localhost kernel: clocksource: Switched to clocksource kvm-clock
Jan 05 20:01:01 localhost kernel: VFS: Disk quotas dquot_6.6.0
Jan 05 20:01:01 localhost kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 05 20:01:01 localhost kernel: pnp: PnP ACPI init
Jan 05 20:01:01 localhost kernel: pnp 00:03: [dma 2]
Jan 05 20:01:01 localhost kernel: pnp: PnP ACPI: found 5 devices
Jan 05 20:01:01 localhost kernel: clocksource: acpi_pm: mask: 0xffffff max_cycles: 0xffffff, max_idle_ns: 2085701024 ns
Jan 05 20:01:01 localhost kernel: NET: Registered PF_INET protocol family
Jan 05 20:01:01 localhost kernel: IP idents hash table entries: 131072 (order: 8, 1048576 bytes, linear)
Jan 05 20:01:01 localhost kernel: tcp_listen_portaddr_hash hash table entries: 4096 (order: 4, 65536 bytes, linear)
Jan 05 20:01:01 localhost kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 05 20:01:01 localhost kernel: TCP established hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 05 20:01:01 localhost kernel: TCP bind hash table entries: 65536 (order: 8, 1048576 bytes, linear)
Jan 05 20:01:01 localhost kernel: TCP: Hash tables configured (established 65536 bind 65536)
Jan 05 20:01:01 localhost kernel: MPTCP token hash table entries: 8192 (order: 5, 196608 bytes, linear)
Jan 05 20:01:01 localhost kernel: UDP hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 05 20:01:01 localhost kernel: UDP-Lite hash table entries: 4096 (order: 5, 131072 bytes, linear)
Jan 05 20:01:01 localhost kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 05 20:01:01 localhost kernel: NET: Registered PF_XDP protocol family
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: resource 4 [io  0x0000-0x0cf7 window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: resource 5 [io  0x0d00-0xffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: resource 6 [mem 0x000a0000-0x000bffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: resource 7 [mem 0xc0000000-0xfebfffff window]
Jan 05 20:01:01 localhost kernel: pci_bus 0000:00: resource 8 [mem 0x240000000-0x2bfffffff window]
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.0: PIIX3: Enabling Passive Release
Jan 05 20:01:01 localhost kernel: pci 0000:00:00.0: Limiting direct PCI/PCI transfers
Jan 05 20:01:01 localhost kernel: ACPI: \_SB_.LNKD: Enabled at IRQ 11
Jan 05 20:01:01 localhost kernel: pci 0000:00:01.2: quirk_usb_early_handoff+0x0/0x160 took 100769 usecs
Jan 05 20:01:01 localhost kernel: PCI: CLS 0 bytes, default 64
Jan 05 20:01:01 localhost kernel: PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Jan 05 20:01:01 localhost kernel: software IO TLB: mapped [mem 0x00000000ab000000-0x00000000af000000] (64MB)
Jan 05 20:01:01 localhost kernel: ACPI: bus type thunderbolt registered
Jan 05 20:01:01 localhost kernel: Trying to unpack rootfs image as initramfs...
Jan 05 20:01:01 localhost kernel: Initialise system trusted keyrings
Jan 05 20:01:01 localhost kernel: Key type blacklist registered
Jan 05 20:01:01 localhost kernel: workingset: timestamp_bits=36 max_order=21 bucket_order=0
Jan 05 20:01:01 localhost kernel: zbud: loaded
Jan 05 20:01:01 localhost kernel: integrity: Platform Keyring initialized
Jan 05 20:01:01 localhost kernel: integrity: Machine keyring initialized
Jan 05 20:01:01 localhost kernel: Freeing initrd memory: 87836K
Jan 05 20:01:01 localhost kernel: NET: Registered PF_ALG protocol family
Jan 05 20:01:01 localhost kernel: xor: automatically using best checksumming function   avx       
Jan 05 20:01:01 localhost kernel: Key type asymmetric registered
Jan 05 20:01:01 localhost kernel: Asymmetric key parser 'x509' registered
Jan 05 20:01:01 localhost kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 246)
Jan 05 20:01:01 localhost kernel: io scheduler mq-deadline registered
Jan 05 20:01:01 localhost kernel: io scheduler kyber registered
Jan 05 20:01:01 localhost kernel: io scheduler bfq registered
Jan 05 20:01:01 localhost kernel: atomic64_test: passed for x86-64 platform with CX8 and with SSE
Jan 05 20:01:01 localhost kernel: shpchp: Standard Hot Plug PCI Controller Driver version: 0.4
Jan 05 20:01:01 localhost kernel: input: Power Button as /devices/LNXSYSTM:00/LNXPWRBN:00/input/input0
Jan 05 20:01:01 localhost kernel: ACPI: button: Power Button [PWRF]
Jan 05 20:01:01 localhost kernel: ACPI: \_SB_.LNKB: Enabled at IRQ 10
Jan 05 20:01:01 localhost kernel: ACPI: \_SB_.LNKC: Enabled at IRQ 11
Jan 05 20:01:01 localhost kernel: ACPI: \_SB_.LNKA: Enabled at IRQ 10
Jan 05 20:01:01 localhost kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 05 20:01:01 localhost kernel: 00:00: ttyS0 at I/O 0x3f8 (irq = 4, base_baud = 115200) is a 16550A
Jan 05 20:01:01 localhost kernel: Non-volatile memory driver v1.3
Jan 05 20:01:01 localhost kernel: rdac: device handler registered
Jan 05 20:01:01 localhost kernel: hp_sw: device handler registered
Jan 05 20:01:01 localhost kernel: emc: device handler registered
Jan 05 20:01:01 localhost kernel: alua: device handler registered
Jan 05 20:01:01 localhost kernel: uhci_hcd 0000:00:01.2: UHCI Host Controller
Jan 05 20:01:01 localhost kernel: uhci_hcd 0000:00:01.2: new USB bus registered, assigned bus number 1
Jan 05 20:01:01 localhost kernel: uhci_hcd 0000:00:01.2: detected 2 ports
Jan 05 20:01:01 localhost kernel: uhci_hcd 0000:00:01.2: irq 11, io port 0x0000c100
Jan 05 20:01:01 localhost kernel: usb usb1: New USB device found, idVendor=1d6b, idProduct=0001, bcdDevice= 5.14
Jan 05 20:01:01 localhost kernel: usb usb1: New USB device strings: Mfr=3, Product=2, SerialNumber=1
Jan 05 20:01:01 localhost kernel: usb usb1: Product: UHCI Host Controller
Jan 05 20:01:01 localhost kernel: usb usb1: Manufacturer: Linux 5.14.0-654.el9.x86_64 uhci_hcd
Jan 05 20:01:01 localhost kernel: usb usb1: SerialNumber: 0000:00:01.2
Jan 05 20:01:01 localhost kernel: hub 1-0:1.0: USB hub found
Jan 05 20:01:01 localhost kernel: hub 1-0:1.0: 2 ports detected
Jan 05 20:01:01 localhost kernel: usbcore: registered new interface driver usbserial_generic
Jan 05 20:01:01 localhost kernel: usbserial: USB Serial support registered for generic
Jan 05 20:01:01 localhost kernel: i8042: PNP: PS/2 Controller [PNP0303:KBD,PNP0f13:MOU] at 0x60,0x64 irq 1,12
Jan 05 20:01:01 localhost kernel: serio: i8042 KBD port at 0x60,0x64 irq 1
Jan 05 20:01:01 localhost kernel: serio: i8042 AUX port at 0x60,0x64 irq 12
Jan 05 20:01:01 localhost kernel: mousedev: PS/2 mouse device common for all mice
Jan 05 20:01:01 localhost kernel: rtc_cmos 00:04: RTC can wake from S4
Jan 05 20:01:01 localhost kernel: rtc_cmos 00:04: registered as rtc0
Jan 05 20:01:01 localhost kernel: rtc_cmos 00:04: setting system clock to 2026-01-05T20:01:00 UTC (1767643260)
Jan 05 20:01:01 localhost kernel: rtc_cmos 00:04: alarms up to one day, y3k, 242 bytes nvram
Jan 05 20:01:01 localhost kernel: input: AT Translated Set 2 keyboard as /devices/platform/i8042/serio0/input/input1
Jan 05 20:01:01 localhost kernel: amd_pstate: the _CPC object is not present in SBIOS or ACPI disabled
Jan 05 20:01:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input4
Jan 05 20:01:01 localhost kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 05 20:01:01 localhost kernel: usbcore: registered new interface driver usbhid
Jan 05 20:01:01 localhost kernel: usbhid: USB HID core driver
Jan 05 20:01:01 localhost kernel: drop_monitor: Initializing network drop monitor service
Jan 05 20:01:01 localhost kernel: input: VirtualPS/2 VMware VMMouse as /devices/platform/i8042/serio1/input/input3
Jan 05 20:01:01 localhost kernel: Initializing XFRM netlink socket
Jan 05 20:01:01 localhost kernel: NET: Registered PF_INET6 protocol family
Jan 05 20:01:01 localhost kernel: Segment Routing with IPv6
Jan 05 20:01:01 localhost kernel: NET: Registered PF_PACKET protocol family
Jan 05 20:01:01 localhost kernel: mpls_gso: MPLS GSO support
Jan 05 20:01:01 localhost kernel: IPI shorthand broadcast: enabled
Jan 05 20:01:01 localhost kernel: AVX2 version of gcm_enc/dec engaged.
Jan 05 20:01:01 localhost kernel: AES CTR mode by8 optimization enabled
Jan 05 20:01:01 localhost kernel: sched_clock: Marking stable (1312005180, 147820960)->(1583332039, -123505899)
Jan 05 20:01:01 localhost kernel: registered taskstats version 1
Jan 05 20:01:01 localhost kernel: Loading compiled-in X.509 certificates
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 1033950e50bfbfa81c0905119b09a8a13ebc27cf'
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux Driver Update Program (key 3): bf57f3e87362bc7229d9f465321773dfd1f77a80'
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'Red Hat Enterprise Linux kpatch signing key: 4d38fd864ebe18c5f0b72e3852e2014c3a676fc8'
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'RH-IMA-CA: Red Hat IMA CA: fb31825dd0e073685b264e3038963673f753959a'
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'Nvidia GPU OOT signing 001: 55e1cef88193e60419f0b0ec379c49f77545acf0'
Jan 05 20:01:01 localhost kernel: Demotion targets for Node 0: null
Jan 05 20:01:01 localhost kernel: page_owner is disabled
Jan 05 20:01:01 localhost kernel: Key type .fscrypt registered
Jan 05 20:01:01 localhost kernel: Key type fscrypt-provisioning registered
Jan 05 20:01:01 localhost kernel: Key type big_key registered
Jan 05 20:01:01 localhost kernel: Key type encrypted registered
Jan 05 20:01:01 localhost kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 05 20:01:01 localhost kernel: Loading compiled-in module X.509 certificates
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'The CentOS Project: CentOS Stream kernel signing key: 1033950e50bfbfa81c0905119b09a8a13ebc27cf'
Jan 05 20:01:01 localhost kernel: ima: Allocated hash algorithm: sha256
Jan 05 20:01:01 localhost kernel: ima: No architecture policies found
Jan 05 20:01:01 localhost kernel: evm: Initialising EVM extended attributes:
Jan 05 20:01:01 localhost kernel: evm: security.selinux
Jan 05 20:01:01 localhost kernel: evm: security.SMACK64 (disabled)
Jan 05 20:01:01 localhost kernel: evm: security.SMACK64EXEC (disabled)
Jan 05 20:01:01 localhost kernel: evm: security.SMACK64TRANSMUTE (disabled)
Jan 05 20:01:01 localhost kernel: evm: security.SMACK64MMAP (disabled)
Jan 05 20:01:01 localhost kernel: evm: security.apparmor (disabled)
Jan 05 20:01:01 localhost kernel: evm: security.ima
Jan 05 20:01:01 localhost kernel: evm: security.capability
Jan 05 20:01:01 localhost kernel: evm: HMAC attrs: 0x1
Jan 05 20:01:01 localhost kernel: usb 1-1: new full-speed USB device number 2 using uhci_hcd
Jan 05 20:01:01 localhost kernel: Running certificate verification RSA selftest
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'Certificate verification self-testing key: f58703bb33ce1b73ee02eccdee5b8817518fe3db'
Jan 05 20:01:01 localhost kernel: Running certificate verification ECDSA selftest
Jan 05 20:01:01 localhost kernel: Loaded X.509 cert 'Certificate verification ECDSA self-testing key: 2900bcea1deb7bc8479a84a23d758efdfdd2b2d3'
Jan 05 20:01:01 localhost kernel: clk: Disabling unused clocks
Jan 05 20:01:01 localhost kernel: Freeing unused decrypted memory: 2028K
Jan 05 20:01:01 localhost kernel: Freeing unused kernel image (initmem) memory: 4196K
Jan 05 20:01:01 localhost kernel: usb 1-1: New USB device found, idVendor=0627, idProduct=0001, bcdDevice= 0.00
Jan 05 20:01:01 localhost kernel: usb 1-1: New USB device strings: Mfr=1, Product=3, SerialNumber=10
Jan 05 20:01:01 localhost kernel: usb 1-1: Product: QEMU USB Tablet
Jan 05 20:01:01 localhost kernel: usb 1-1: Manufacturer: QEMU
Jan 05 20:01:01 localhost kernel: usb 1-1: SerialNumber: 28754-0000:00:01.2-1
Jan 05 20:01:01 localhost kernel: Write protecting the kernel read-only data: 30720k
Jan 05 20:01:01 localhost kernel: Freeing unused kernel image (rodata/data gap) memory: 428K
Jan 05 20:01:01 localhost kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.2/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input5
Jan 05 20:01:01 localhost kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:00:01.2-1/input0
Jan 05 20:01:01 localhost kernel: x86/mm: Checked W+X mappings: passed, no W+X pages found.
Jan 05 20:01:01 localhost kernel: Run /init as init process
Jan 05 20:01:01 localhost kernel:   with arguments:
Jan 05 20:01:01 localhost kernel:     /init
Jan 05 20:01:01 localhost kernel:   with environment:
Jan 05 20:01:01 localhost kernel:     HOME=/
Jan 05 20:01:01 localhost kernel:     TERM=linux
Jan 05 20:01:01 localhost kernel:     BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64
Jan 05 20:01:01 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 05 20:01:01 localhost systemd[1]: Detected virtualization kvm.
Jan 05 20:01:01 localhost systemd[1]: Detected architecture x86-64.
Jan 05 20:01:01 localhost systemd[1]: Running in initrd.
Jan 05 20:01:01 localhost systemd[1]: No hostname configured, using default hostname.
Jan 05 20:01:01 localhost systemd[1]: Hostname set to <localhost>.
Jan 05 20:01:01 localhost systemd[1]: Initializing machine ID from VM UUID.
Jan 05 20:01:01 localhost systemd[1]: Queued start job for default target Initrd Default Target.
Jan 05 20:01:01 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 05 20:01:01 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 05 20:01:01 localhost systemd[1]: Reached target Initrd /usr File System.
Jan 05 20:01:01 localhost systemd[1]: Reached target Local File Systems.
Jan 05 20:01:01 localhost systemd[1]: Reached target Path Units.
Jan 05 20:01:01 localhost systemd[1]: Reached target Slice Units.
Jan 05 20:01:01 localhost systemd[1]: Reached target Swaps.
Jan 05 20:01:01 localhost systemd[1]: Reached target Timer Units.
Jan 05 20:01:01 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 05 20:01:01 localhost systemd[1]: Listening on Journal Socket (/dev/log).
Jan 05 20:01:01 localhost systemd[1]: Listening on Journal Socket.
Jan 05 20:01:01 localhost systemd[1]: Listening on udev Control Socket.
Jan 05 20:01:01 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 05 20:01:01 localhost systemd[1]: Reached target Socket Units.
Jan 05 20:01:01 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 05 20:01:01 localhost systemd[1]: Starting Journal Service...
Jan 05 20:01:01 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 05 20:01:01 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 05 20:01:01 localhost systemd[1]: Starting Create System Users...
Jan 05 20:01:01 localhost systemd[1]: Starting Setup Virtual Console...
Jan 05 20:01:01 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 05 20:01:01 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 05 20:01:01 localhost systemd[1]: Finished Create System Users.
Jan 05 20:01:01 localhost systemd-journald[304]: Journal started
Jan 05 20:01:01 localhost systemd-journald[304]: Runtime Journal (/run/log/journal/103e5390173f4d3f998322472b3a8bf4) is 8.0M, max 153.6M, 145.6M free.
Jan 05 20:01:01 localhost systemd-sysusers[308]: Creating group 'users' with GID 100.
Jan 05 20:01:01 localhost systemd-sysusers[308]: Creating group 'dbus' with GID 81.
Jan 05 20:01:01 localhost systemd-sysusers[308]: Creating user 'dbus' (System Message Bus) with UID 81 and GID 81.
Jan 05 20:01:01 localhost systemd[1]: Started Journal Service.
Jan 05 20:01:01 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 05 20:01:01 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 05 20:01:01 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 05 20:01:01 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 05 20:01:01 localhost systemd[1]: Finished Setup Virtual Console.
Jan 05 20:01:01 localhost systemd[1]: dracut ask for additional cmdline parameters was skipped because no trigger condition checks were met.
Jan 05 20:01:01 localhost systemd[1]: Starting dracut cmdline hook...
Jan 05 20:01:01 localhost dracut-cmdline[324]: dracut-9 dracut-057-102.git20250818.el9
Jan 05 20:01:01 localhost dracut-cmdline[324]: Using kernel command line parameters:    BOOT_IMAGE=(hd0,msdos1)/boot/vmlinuz-5.14.0-654.el9.x86_64 root=UUID=f677d6a5-1bcd-4a82-bb95-263d2adaa51b ro console=ttyS0,115200n8 no_timer_check net.ifnames=0 crashkernel=1G-2G:192M,2G-64G:256M,64G-:512M
Jan 05 20:01:01 localhost systemd[1]: Finished dracut cmdline hook.
Jan 05 20:01:01 localhost systemd[1]: Starting dracut pre-udev hook...
Jan 05 20:01:01 localhost kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 05 20:01:01 localhost kernel: device-mapper: uevent: version 1.0.3
Jan 05 20:01:01 localhost kernel: device-mapper: ioctl: 4.50.0-ioctl (2025-04-28) initialised: dm-devel@lists.linux.dev
Jan 05 20:01:01 localhost kernel: RPC: Registered named UNIX socket transport module.
Jan 05 20:01:01 localhost kernel: RPC: Registered udp transport module.
Jan 05 20:01:01 localhost kernel: RPC: Registered tcp transport module.
Jan 05 20:01:01 localhost kernel: RPC: Registered tcp-with-tls transport module.
Jan 05 20:01:01 localhost kernel: RPC: Registered tcp NFSv4.1 backchannel transport module.
Jan 05 20:01:01 localhost rpc.statd[442]: Version 2.5.4 starting
Jan 05 20:01:01 localhost rpc.statd[442]: Initializing NSM state
Jan 05 20:01:01 localhost rpc.idmapd[447]: Setting log level to 0
Jan 05 20:01:01 localhost systemd[1]: Finished dracut pre-udev hook.
Jan 05 20:01:02 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 05 20:01:02 localhost systemd-udevd[460]: Using default interface naming scheme 'rhel-9.0'.
Jan 05 20:01:02 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 05 20:01:02 localhost systemd[1]: Starting dracut pre-trigger hook...
Jan 05 20:01:02 localhost systemd[1]: Finished dracut pre-trigger hook.
Jan 05 20:01:02 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 05 20:01:02 localhost systemd[1]: Created slice Slice /system/modprobe.
Jan 05 20:01:02 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 20:01:02 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 05 20:01:02 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 20:01:02 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 20:01:02 localhost systemd[1]: Mounting Kernel Configuration File System...
Jan 05 20:01:02 localhost systemd[1]: nm-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 05 20:01:02 localhost systemd[1]: Reached target Network.
Jan 05 20:01:02 localhost systemd[1]: nm-wait-online-initrd.service was skipped because of an unmet condition check (ConditionPathExists=/run/NetworkManager/initrd/neednet).
Jan 05 20:01:02 localhost systemd[1]: Starting dracut initqueue hook...
Jan 05 20:01:02 localhost kernel: virtio_blk virtio2: 8/0/0 default/read/poll queues
Jan 05 20:01:02 localhost systemd[1]: Mounted Kernel Configuration File System.
Jan 05 20:01:02 localhost systemd[1]: Reached target System Initialization.
Jan 05 20:01:02 localhost kernel: virtio_blk virtio2: [vda] 167772160 512-byte logical blocks (85.9 GB/80.0 GiB)
Jan 05 20:01:02 localhost kernel:  vda: vda1
Jan 05 20:01:02 localhost systemd[1]: Reached target Basic System.
Jan 05 20:01:02 localhost kernel: libata version 3.00 loaded.
Jan 05 20:01:02 localhost kernel: ata_piix 0000:00:01.1: version 2.13
Jan 05 20:01:02 localhost kernel: scsi host0: ata_piix
Jan 05 20:01:02 localhost kernel: scsi host1: ata_piix
Jan 05 20:01:02 localhost kernel: ata1: PATA max MWDMA2 cmd 0x1f0 ctl 0x3f6 bmdma 0xc140 irq 14 lpm-pol 0
Jan 05 20:01:02 localhost kernel: ata2: PATA max MWDMA2 cmd 0x170 ctl 0x376 bmdma 0xc148 irq 15 lpm-pol 0
Jan 05 20:01:02 localhost systemd[1]: Found device /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b.
Jan 05 20:01:02 localhost systemd[1]: Reached target Initrd Root Device.
Jan 05 20:01:02 localhost kernel: ata1: found unknown device (class 0)
Jan 05 20:01:02 localhost kernel: ata1.00: ATAPI: QEMU DVD-ROM, 2.5+, max UDMA/100
Jan 05 20:01:02 localhost kernel: scsi 0:0:0:0: CD-ROM            QEMU     QEMU DVD-ROM     2.5+ PQ: 0 ANSI: 5
Jan 05 20:01:02 localhost systemd-udevd[474]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:01:02 localhost kernel: scsi 0:0:0:0: Attached scsi generic sg0 type 5
Jan 05 20:01:02 localhost kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 4x/4x cd/rw xa/form2 tray
Jan 05 20:01:02 localhost kernel: cdrom: Uniform CD-ROM driver Revision: 3.20
Jan 05 20:01:02 localhost kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0
Jan 05 20:01:02 localhost systemd[1]: Finished dracut initqueue hook.
Jan 05 20:01:02 localhost systemd[1]: Reached target Preparation for Remote File Systems.
Jan 05 20:01:02 localhost systemd[1]: Reached target Remote Encrypted Volumes.
Jan 05 20:01:02 localhost systemd[1]: Reached target Remote File Systems.
Jan 05 20:01:02 localhost systemd[1]: Starting dracut pre-mount hook...
Jan 05 20:01:02 localhost systemd[1]: Finished dracut pre-mount hook.
Jan 05 20:01:02 localhost systemd[1]: Starting File System Check on /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b...
Jan 05 20:01:02 localhost systemd-fsck[554]: /usr/sbin/fsck.xfs: XFS file system.
Jan 05 20:01:02 localhost systemd[1]: Finished File System Check on /dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b.
Jan 05 20:01:02 localhost systemd[1]: Mounting /sysroot...
Jan 05 20:01:03 localhost kernel: SGI XFS with ACLs, security attributes, scrub, quota, no debug enabled
Jan 05 20:01:03 localhost kernel: XFS (vda1): Mounting V5 Filesystem f677d6a5-1bcd-4a82-bb95-263d2adaa51b
Jan 05 20:01:03 localhost kernel: XFS (vda1): Ending clean mount
Jan 05 20:01:03 localhost systemd[1]: Mounted /sysroot.
Jan 05 20:01:03 localhost systemd[1]: Reached target Initrd Root File System.
Jan 05 20:01:03 localhost systemd[1]: Starting Mountpoints Configured in the Real Root...
Jan 05 20:01:03 localhost systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Finished Mountpoints Configured in the Real Root.
Jan 05 20:01:03 localhost systemd[1]: Reached target Initrd File Systems.
Jan 05 20:01:03 localhost systemd[1]: Reached target Initrd Default Target.
Jan 05 20:01:03 localhost systemd[1]: Starting dracut mount hook...
Jan 05 20:01:03 localhost systemd[1]: Finished dracut mount hook.
Jan 05 20:01:03 localhost systemd[1]: Starting dracut pre-pivot and cleanup hook...
Jan 05 20:01:03 localhost rpc.idmapd[447]: exiting on signal 15
Jan 05 20:01:03 localhost systemd[1]: var-lib-nfs-rpc_pipefs.mount: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Finished dracut pre-pivot and cleanup hook.
Jan 05 20:01:03 localhost systemd[1]: Starting Cleaning Up and Shutting Down Daemons...
Jan 05 20:01:03 localhost systemd[1]: Stopped target Network.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Remote Encrypted Volumes.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Timer Units.
Jan 05 20:01:03 localhost systemd[1]: dbus.socket: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Closed D-Bus System Message Bus Socket.
Jan 05 20:01:03 localhost systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut pre-pivot and cleanup hook.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Initrd Default Target.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Basic System.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Initrd Root Device.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Initrd /usr File System.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Path Units.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Remote File Systems.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Preparation for Remote File Systems.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Slice Units.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Socket Units.
Jan 05 20:01:03 localhost systemd[1]: Stopped target System Initialization.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Local File Systems.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Swaps.
Jan 05 20:01:03 localhost systemd[1]: dracut-mount.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut mount hook.
Jan 05 20:01:03 localhost systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut pre-mount hook.
Jan 05 20:01:03 localhost systemd[1]: Stopped target Local Encrypted Volumes.
Jan 05 20:01:03 localhost systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Dispatch Password Requests to Console Directory Watch.
Jan 05 20:01:03 localhost systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut initqueue hook.
Jan 05 20:01:03 localhost systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Apply Kernel Variables.
Jan 05 20:01:03 localhost systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Create Volatile Files and Directories.
Jan 05 20:01:03 localhost systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Coldplug All udev Devices.
Jan 05 20:01:03 localhost systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut pre-trigger hook.
Jan 05 20:01:03 localhost systemd[1]: Stopping Rule-based Manager for Device Events and Files...
Jan 05 20:01:03 localhost systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Setup Virtual Console.
Jan 05 20:01:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Rule-based Manager for Device Events and Files.
Jan 05 20:01:03 localhost systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Finished Cleaning Up and Shutting Down Daemons.
Jan 05 20:01:03 localhost systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Closed udev Control Socket.
Jan 05 20:01:03 localhost systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Closed udev Kernel Socket.
Jan 05 20:01:03 localhost systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut pre-udev hook.
Jan 05 20:01:03 localhost systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped dracut cmdline hook.
Jan 05 20:01:03 localhost systemd[1]: Starting Cleanup udev Database...
Jan 05 20:01:03 localhost systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Create Static Device Nodes in /dev.
Jan 05 20:01:03 localhost systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Create List of Static Device Nodes.
Jan 05 20:01:03 localhost systemd[1]: systemd-sysusers.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Stopped Create System Users.
Jan 05 20:01:03 localhost systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup\x2ddev.service.mount: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: run-credentials-systemd\x2dsysusers.service.mount: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 05 20:01:03 localhost systemd[1]: Finished Cleanup udev Database.
Jan 05 20:01:03 localhost systemd[1]: Reached target Switch Root.
Jan 05 20:01:03 localhost systemd[1]: Starting Switch Root...
Jan 05 20:01:03 localhost systemd[1]: Switching root.
Jan 05 20:01:03 localhost systemd-journald[304]: Journal stopped
Jan 05 20:01:04 localhost systemd-journald[304]: Received SIGTERM from PID 1 (systemd).
Jan 05 20:01:04 localhost kernel: audit: type=1404 audit(1767643263.781:2): enforcing=1 old_enforcing=0 auid=4294967295 ses=4294967295 enabled=1 old-enabled=1 lsm=selinux res=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability open_perms=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:01:04 localhost kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:01:04 localhost kernel: audit: type=1403 audit(1767643263.910:3): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 05 20:01:04 localhost systemd[1]: Successfully loaded SELinux policy in 132.231ms.
Jan 05 20:01:04 localhost systemd[1]: Relabelled /dev, /dev/shm, /run, /sys/fs/cgroup in 26.489ms.
Jan 05 20:01:04 localhost systemd[1]: systemd 252-64.el9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT +GNUTLS +OPENSSL +ACL +BLKID +CURL +ELFUTILS +FIDO2 +IDN2 -IDN -IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY +P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK +XKBCOMMON +UTMP +SYSVINIT default-hierarchy=unified)
Jan 05 20:01:04 localhost systemd[1]: Detected virtualization kvm.
Jan 05 20:01:04 localhost systemd[1]: Detected architecture x86-64.
Jan 05 20:01:04 localhost systemd-rc-local-generator[637]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:01:04 localhost systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Stopped Switch Root.
Jan 05 20:01:04 localhost systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 05 20:01:04 localhost systemd[1]: Created slice Slice /system/getty.
Jan 05 20:01:04 localhost systemd[1]: Created slice Slice /system/serial-getty.
Jan 05 20:01:04 localhost systemd[1]: Created slice Slice /system/sshd-keygen.
Jan 05 20:01:04 localhost systemd[1]: Created slice User and Session Slice.
Jan 05 20:01:04 localhost systemd[1]: Started Dispatch Password Requests to Console Directory Watch.
Jan 05 20:01:04 localhost systemd[1]: Started Forward Password Requests to Wall Directory Watch.
Jan 05 20:01:04 localhost systemd[1]: Set up automount Arbitrary Executable File Formats File System Automount Point.
Jan 05 20:01:04 localhost systemd[1]: Reached target Local Encrypted Volumes.
Jan 05 20:01:04 localhost systemd[1]: Stopped target Switch Root.
Jan 05 20:01:04 localhost systemd[1]: Stopped target Initrd File Systems.
Jan 05 20:01:04 localhost systemd[1]: Stopped target Initrd Root File System.
Jan 05 20:01:04 localhost systemd[1]: Reached target Local Integrity Protected Volumes.
Jan 05 20:01:04 localhost systemd[1]: Reached target Path Units.
Jan 05 20:01:04 localhost systemd[1]: Reached target rpc_pipefs.target.
Jan 05 20:01:04 localhost systemd[1]: Reached target Slice Units.
Jan 05 20:01:04 localhost systemd[1]: Reached target Swaps.
Jan 05 20:01:04 localhost systemd[1]: Reached target Local Verity Protected Volumes.
Jan 05 20:01:04 localhost systemd[1]: Listening on RPCbind Server Activation Socket.
Jan 05 20:01:04 localhost systemd[1]: Reached target RPC Port Mapper.
Jan 05 20:01:04 localhost systemd[1]: Listening on Process Core Dump Socket.
Jan 05 20:01:04 localhost systemd[1]: Listening on initctl Compatibility Named Pipe.
Jan 05 20:01:04 localhost systemd[1]: Listening on udev Control Socket.
Jan 05 20:01:04 localhost systemd[1]: Listening on udev Kernel Socket.
Jan 05 20:01:04 localhost systemd[1]: Mounting Huge Pages File System...
Jan 05 20:01:04 localhost systemd[1]: Mounting POSIX Message Queue File System...
Jan 05 20:01:04 localhost systemd[1]: Mounting Kernel Debug File System...
Jan 05 20:01:04 localhost systemd[1]: Mounting Kernel Trace File System...
Jan 05 20:01:04 localhost systemd[1]: Kernel Module supporting RPCSEC_GSS was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 05 20:01:04 localhost systemd[1]: Starting Create List of Static Device Nodes...
Jan 05 20:01:04 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 20:01:04 localhost systemd[1]: Starting Load Kernel Module drm...
Jan 05 20:01:04 localhost systemd[1]: Starting Load Kernel Module efi_pstore...
Jan 05 20:01:04 localhost systemd[1]: Starting Load Kernel Module fuse...
Jan 05 20:01:04 localhost systemd[1]: Starting Read and set NIS domainname from /etc/sysconfig/network...
Jan 05 20:01:04 localhost systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Stopped File System Check on Root Device.
Jan 05 20:01:04 localhost systemd[1]: Stopped Journal Service.
Jan 05 20:01:04 localhost kernel: fuse: init (API version 7.37)
Jan 05 20:01:04 localhost systemd[1]: Starting Journal Service...
Jan 05 20:01:04 localhost systemd[1]: Load Kernel Modules was skipped because no trigger condition checks were met.
Jan 05 20:01:04 localhost systemd[1]: Starting Generate network units from Kernel command line...
Jan 05 20:01:04 localhost systemd[1]: TPM2 PCR Machine ID Measurement was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 20:01:04 localhost systemd[1]: Starting Remount Root and Kernel File Systems...
Jan 05 20:01:04 localhost systemd[1]: Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 05 20:01:04 localhost systemd[1]: Starting Apply Kernel Variables...
Jan 05 20:01:04 localhost kernel: xfs filesystem being remounted at / supports timestamps until 2038 (0x7fffffff)
Jan 05 20:01:04 localhost systemd[1]: Starting Coldplug All udev Devices...
Jan 05 20:01:04 localhost systemd-journald[678]: Journal started
Jan 05 20:01:04 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/f46796bb2b37cbb1d783b32fbf8770cb) is 8.0M, max 153.6M, 145.6M free.
Jan 05 20:01:04 localhost systemd[1]: Queued start job for default target Multi-User System.
Jan 05 20:01:04 localhost systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Started Journal Service.
Jan 05 20:01:04 localhost kernel: ACPI: bus type drm_connector registered
Jan 05 20:01:04 localhost systemd[1]: Mounted Huge Pages File System.
Jan 05 20:01:04 localhost systemd[1]: Mounted POSIX Message Queue File System.
Jan 05 20:01:04 localhost systemd[1]: Mounted Kernel Debug File System.
Jan 05 20:01:04 localhost systemd[1]: Mounted Kernel Trace File System.
Jan 05 20:01:04 localhost systemd[1]: Finished Create List of Static Device Nodes.
Jan 05 20:01:04 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 20:01:04 localhost systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Finished Load Kernel Module drm.
Jan 05 20:01:04 localhost systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Finished Load Kernel Module efi_pstore.
Jan 05 20:01:04 localhost systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 05 20:01:04 localhost systemd[1]: Finished Load Kernel Module fuse.
Jan 05 20:01:04 localhost systemd[1]: Finished Read and set NIS domainname from /etc/sysconfig/network.
Jan 05 20:01:04 localhost systemd[1]: Finished Generate network units from Kernel command line.
Jan 05 20:01:04 localhost systemd[1]: Finished Remount Root and Kernel File Systems.
Jan 05 20:01:04 localhost systemd[1]: Finished Apply Kernel Variables.
Jan 05 20:01:04 localhost systemd[1]: Mounting FUSE Control File System...
Jan 05 20:01:04 localhost systemd[1]: First Boot Wizard was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 05 20:01:04 localhost systemd[1]: Starting Rebuild Hardware Database...
Jan 05 20:01:04 localhost systemd[1]: Starting Flush Journal to Persistent Storage...
Jan 05 20:01:04 localhost systemd[1]: Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 05 20:01:04 localhost systemd[1]: Starting Load/Save OS Random Seed...
Jan 05 20:01:04 localhost systemd[1]: Starting Create System Users...
Jan 05 20:01:04 localhost systemd[1]: Mounted FUSE Control File System.
Jan 05 20:01:04 localhost systemd-journald[678]: Runtime Journal (/run/log/journal/f46796bb2b37cbb1d783b32fbf8770cb) is 8.0M, max 153.6M, 145.6M free.
Jan 05 20:01:04 localhost systemd-journald[678]: Received client request to flush runtime journal.
Jan 05 20:01:04 localhost systemd[1]: Finished Flush Journal to Persistent Storage.
Jan 05 20:01:04 localhost systemd[1]: Finished Coldplug All udev Devices.
Jan 05 20:01:04 localhost systemd[1]: Finished Load/Save OS Random Seed.
Jan 05 20:01:04 localhost systemd[1]: First Boot Complete was skipped because of an unmet condition check (ConditionFirstBoot=yes).
Jan 05 20:01:04 localhost systemd[1]: Finished Create System Users.
Jan 05 20:01:04 localhost systemd[1]: Starting Create Static Device Nodes in /dev...
Jan 05 20:01:04 localhost systemd[1]: Finished Create Static Device Nodes in /dev.
Jan 05 20:01:04 localhost systemd[1]: Reached target Preparation for Local File Systems.
Jan 05 20:01:04 localhost systemd[1]: Reached target Local File Systems.
Jan 05 20:01:04 localhost systemd[1]: Starting Rebuild Dynamic Linker Cache...
Jan 05 20:01:04 localhost systemd[1]: Mark the need to relabel after reboot was skipped because of an unmet condition check (ConditionSecurity=!selinux).
Jan 05 20:01:04 localhost systemd[1]: Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 05 20:01:04 localhost systemd[1]: Update Boot Loader Random Seed was skipped because no trigger condition checks were met.
Jan 05 20:01:04 localhost systemd[1]: Starting Automatic Boot Loader Update...
Jan 05 20:01:04 localhost systemd[1]: Commit a transient machine-id on disk was skipped because of an unmet condition check (ConditionPathIsMountPoint=/etc/machine-id).
Jan 05 20:01:04 localhost systemd[1]: Starting Create Volatile Files and Directories...
Jan 05 20:01:04 localhost bootctl[697]: Couldn't find EFI system partition, skipping.
Jan 05 20:01:04 localhost systemd[1]: Finished Automatic Boot Loader Update.
Jan 05 20:01:04 localhost systemd[1]: Finished Create Volatile Files and Directories.
Jan 05 20:01:04 localhost systemd[1]: Finished Rebuild Dynamic Linker Cache.
Jan 05 20:01:04 localhost systemd[1]: Starting Security Auditing Service...
Jan 05 20:01:04 localhost systemd[1]: Starting RPC Bind...
Jan 05 20:01:04 localhost systemd[1]: Starting Rebuild Journal Catalog...
Jan 05 20:01:04 localhost auditd[703]: audit dispatcher initialized with q_depth=2000 and 1 active plugins
Jan 05 20:01:04 localhost auditd[703]: Init complete, auditd 3.1.5 listening for events (startup state enable)
Jan 05 20:01:04 localhost systemd[1]: Finished Rebuild Journal Catalog.
Jan 05 20:01:05 localhost systemd[1]: Started RPC Bind.
Jan 05 20:01:05 localhost augenrules[708]: /sbin/augenrules: No change
Jan 05 20:01:05 localhost augenrules[724]: No rules
Jan 05 20:01:05 localhost augenrules[724]: enabled 1
Jan 05 20:01:05 localhost augenrules[724]: failure 1
Jan 05 20:01:05 localhost augenrules[724]: pid 703
Jan 05 20:01:05 localhost augenrules[724]: rate_limit 0
Jan 05 20:01:05 localhost augenrules[724]: backlog_limit 8192
Jan 05 20:01:05 localhost augenrules[724]: lost 0
Jan 05 20:01:05 localhost augenrules[724]: backlog 1
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time 60000
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time_actual 0
Jan 05 20:01:05 localhost augenrules[724]: enabled 1
Jan 05 20:01:05 localhost augenrules[724]: failure 1
Jan 05 20:01:05 localhost augenrules[724]: pid 703
Jan 05 20:01:05 localhost augenrules[724]: rate_limit 0
Jan 05 20:01:05 localhost augenrules[724]: backlog_limit 8192
Jan 05 20:01:05 localhost augenrules[724]: lost 0
Jan 05 20:01:05 localhost augenrules[724]: backlog 3
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time 60000
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time_actual 0
Jan 05 20:01:05 localhost augenrules[724]: enabled 1
Jan 05 20:01:05 localhost augenrules[724]: failure 1
Jan 05 20:01:05 localhost augenrules[724]: pid 703
Jan 05 20:01:05 localhost augenrules[724]: rate_limit 0
Jan 05 20:01:05 localhost augenrules[724]: backlog_limit 8192
Jan 05 20:01:05 localhost augenrules[724]: lost 0
Jan 05 20:01:05 localhost augenrules[724]: backlog 2
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time 60000
Jan 05 20:01:05 localhost augenrules[724]: backlog_wait_time_actual 0
Jan 05 20:01:05 localhost systemd[1]: Started Security Auditing Service.
Jan 05 20:01:05 localhost systemd[1]: Starting Record System Boot/Shutdown in UTMP...
Jan 05 20:01:05 localhost systemd[1]: Finished Record System Boot/Shutdown in UTMP.
Jan 05 20:01:05 localhost systemd[1]: Finished Rebuild Hardware Database.
Jan 05 20:01:05 localhost systemd[1]: Starting Rule-based Manager for Device Events and Files...
Jan 05 20:01:05 localhost systemd[1]: Starting Update is Completed...
Jan 05 20:01:05 localhost systemd[1]: Finished Update is Completed.
Jan 05 20:01:05 localhost systemd-udevd[732]: Using default interface naming scheme 'rhel-9.0'.
Jan 05 20:01:05 localhost systemd[1]: Started Rule-based Manager for Device Events and Files.
Jan 05 20:01:05 localhost systemd[1]: Reached target System Initialization.
Jan 05 20:01:05 localhost systemd[1]: Started dnf makecache --timer.
Jan 05 20:01:05 localhost systemd[1]: Started Daily rotation of log files.
Jan 05 20:01:05 localhost systemd[1]: Started Daily Cleanup of Temporary Directories.
Jan 05 20:01:05 localhost systemd[1]: Reached target Timer Units.
Jan 05 20:01:05 localhost systemd[1]: Listening on D-Bus System Message Bus Socket.
Jan 05 20:01:05 localhost systemd[1]: Listening on SSSD Kerberos Cache Manager responder socket.
Jan 05 20:01:05 localhost systemd[1]: Reached target Socket Units.
Jan 05 20:01:05 localhost systemd-udevd[739]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:01:05 localhost systemd[1]: Starting D-Bus System Message Bus...
Jan 05 20:01:05 localhost systemd[1]: TPM2 PCR Barrier (Initialization) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 20:01:05 localhost systemd[1]: Condition check resulted in /dev/ttyS0 being skipped.
Jan 05 20:01:05 localhost systemd[1]: Starting Load Kernel Module configfs...
Jan 05 20:01:05 localhost systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 05 20:01:05 localhost systemd[1]: Finished Load Kernel Module configfs.
Jan 05 20:01:05 localhost systemd[1]: Started D-Bus System Message Bus.
Jan 05 20:01:05 localhost systemd[1]: Reached target Basic System.
Jan 05 20:01:05 localhost kernel: piix4_smbus 0000:00:01.3: SMBus Host Controller at 0x700, revision 0
Jan 05 20:01:05 localhost kernel: i2c i2c-0: 1/1 memory slots populated (from DMI)
Jan 05 20:01:05 localhost kernel: i2c i2c-0: Memory type 0x07 not supported yet, not instantiating SPD
Jan 05 20:01:05 localhost dbus-broker-lau[770]: Ready
Jan 05 20:01:05 localhost systemd[1]: Starting NTP client/server...
Jan 05 20:01:05 localhost systemd[1]: Starting Cloud-init: Local Stage (pre-network)...
Jan 05 20:01:05 localhost systemd[1]: Starting Restore /run/initramfs on shutdown...
Jan 05 20:01:05 localhost systemd[1]: Starting IPv4 firewall with iptables...
Jan 05 20:01:05 localhost systemd[1]: Started irqbalance daemon.
Jan 05 20:01:05 localhost systemd[1]: Load CPU microcode update was skipped because of an unmet condition check (ConditionPathExists=/sys/devices/system/cpu/microcode/reload).
Jan 05 20:01:05 localhost systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:01:05 localhost systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:01:05 localhost systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:01:05 localhost systemd[1]: Reached target sshd-keygen.target.
Jan 05 20:01:05 localhost systemd[1]: System Security Services Daemon was skipped because no trigger condition checks were met.
Jan 05 20:01:05 localhost systemd[1]: Reached target User and Group Name Lookups.
Jan 05 20:01:05 localhost kernel: input: PC Speaker as /devices/platform/pcspkr/input/input6
Jan 05 20:01:05 localhost systemd[1]: Starting User Login Management...
Jan 05 20:01:05 localhost kernel: [drm] pci: virtio-vga detected at 0000:00:02.0
Jan 05 20:01:05 localhost systemd[1]: Finished Restore /run/initramfs on shutdown.
Jan 05 20:01:05 localhost kernel: virtio-pci 0000:00:02.0: vgaarb: deactivate vga console
Jan 05 20:01:05 localhost kernel: Console: switching to colour dummy device 80x25
Jan 05 20:01:05 localhost kernel: [drm] features: -virgl +edid -resource_blob -host_visible
Jan 05 20:01:05 localhost kernel: [drm] features: -context_init
Jan 05 20:01:05 localhost kernel: [drm] number of scanouts: 1
Jan 05 20:01:05 localhost kernel: [drm] number of cap sets: 0
Jan 05 20:01:05 localhost chronyd[798]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 05 20:01:05 localhost chronyd[798]: Loaded 0 symmetric keys
Jan 05 20:01:05 localhost chronyd[798]: Using right/UTC timezone to obtain leap second data
Jan 05 20:01:05 localhost chronyd[798]: Loaded seccomp filter (level 2)
Jan 05 20:01:05 localhost kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:02.0 on minor 0
Jan 05 20:01:05 localhost kernel: fbcon: virtio_gpudrmfb (fb0) is primary device
Jan 05 20:01:05 localhost kernel: Console: switching to colour frame buffer device 128x48
Jan 05 20:01:05 localhost kernel: virtio-pci 0000:00:02.0: [drm] fb0: virtio_gpudrmfb frame buffer device
Jan 05 20:01:05 localhost systemd[1]: Started NTP client/server.
Jan 05 20:01:05 localhost systemd-logind[788]: New seat seat0.
Jan 05 20:01:05 localhost systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 05 20:01:05 localhost systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 05 20:01:05 localhost systemd[1]: Started User Login Management.
Jan 05 20:01:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat will not be maintained in a future major release and may be disabled
Jan 05 20:01:05 localhost kernel: Warning: Deprecated Driver is detected: nft_compat_module_init will not be maintained in a future major release and may be disabled
Jan 05 20:01:05 localhost kernel: kvm_amd: TSC scaling supported
Jan 05 20:01:05 localhost kernel: kvm_amd: Nested Virtualization enabled
Jan 05 20:01:05 localhost kernel: kvm_amd: Nested Paging enabled
Jan 05 20:01:05 localhost kernel: kvm_amd: LBR virtualization supported
Jan 05 20:01:05 localhost iptables.init[781]: iptables: Applying firewall rules: [  OK  ]
Jan 05 20:01:05 localhost systemd[1]: Finished IPv4 firewall with iptables.
Jan 05 20:01:06 localhost cloud-init[840]: Cloud-init v. 24.4-8.el9 running 'init-local' at Mon, 05 Jan 2026 20:01:06 +0000. Up 6.79 seconds.
Jan 05 20:01:06 localhost kernel: ISO 9660 Extensions: Microsoft Joliet Level 3
Jan 05 20:01:06 localhost kernel: ISO 9660 Extensions: RRIP_1991A
Jan 05 20:01:06 localhost systemd[1]: run-cloud\x2dinit-tmp-tmpdtgg2bi0.mount: Deactivated successfully.
Jan 05 20:01:06 localhost systemd[1]: Starting Hostname Service...
Jan 05 20:01:06 localhost systemd[1]: Started Hostname Service.
Jan 05 20:01:06 np0005574782.novalocal systemd-hostnamed[854]: Hostname set to <np0005574782.novalocal> (static)
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Finished Cloud-init: Local Stage (pre-network).
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Reached target Preparation for Network.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Starting Network Manager...
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6572] NetworkManager (version 1.54.2-1.el9) is starting... (boot:a742f362-63b2-484d-bd96-34f7a12572fa)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6579] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6674] manager[0x5629ec891000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6731] hostname: hostname: using hostnamed
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6732] hostname: static hostname changed from (none) to "np0005574782.novalocal"
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6737] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6900] manager[0x5629ec891000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.6901] manager[0x5629ec891000]: rfkill: WWAN hardware radio set enabled
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Listening on Load/Save RF Kill Switch Status /dev/rfkill Watch.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7010] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7013] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7014] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7015] manager: Networking is enabled by state file
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7019] settings: Loaded settings plugin: keyfile (internal)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7041] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7081] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7103] dhcp: init: Using DHCP client 'internal'
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7108] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7132] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7146] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7160] device (lo): Activation: starting connection 'lo' (13386405-8334-4b8c-b612-8be49be697c2)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7178] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7184] device (eth0): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7230] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7237] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7241] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7244] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7251] device (eth0): carrier: link connected
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7258] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7271] device (eth0): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Started Network Manager.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7286] policy: auto-activating connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7292] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7293] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7296] manager: NetworkManager state is now CONNECTING
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7298] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Reached target Network.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7330] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7336] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7383] dhcp4 (eth0): state changed new lease, address=38.102.83.179
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7395] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7422] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Starting GSSAPI Proxy Daemon...
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7495] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7499] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7510] device (lo): Activation: successful, device activated.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7522] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7524] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7530] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7534] device (eth0): Activation: successful, device activated.
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7545] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 20:01:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643266.7550] manager: startup complete
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Started GSSAPI Proxy Daemon.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: RPC security service for NFS client and server was skipped because of an unmet condition check (ConditionPathExists=/etc/krb5.keytab).
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Reached target NFS client services.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Reached target Preparation for Remote File Systems.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Reached target Remote File Systems.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: TPM2 PCR Barrier (User) was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/StubPcrKernelImage-4a67b082-0a4c-41cf-b6c7-440b29bb8c4f).
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 05 20:01:06 np0005574782.novalocal systemd[1]: Starting Cloud-init: Network Stage...
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: Cloud-init v. 24.4-8.el9 running 'init' at Mon, 05 Jan 2026 20:01:07 +0000. Up 7.87 seconds.
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++++++++Net device info+++++++++++++++++++++++++++++++++++++++
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: | Device |  Up  |           Address            |      Mask     | Scope  |     Hw-Address    |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |  eth0  | True |        38.102.83.179         | 255.255.255.0 | global | fa:16:3e:21:18:47 |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |  eth0  | True | fe80::f816:3eff:fe21:1847/64 |       .       |  link  | fa:16:3e:21:18:47 |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   lo   | True |          127.0.0.1           |   255.0.0.0   |  host  |         .         |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   lo   | True |           ::1/128            |       .       |  host  |         .         |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +--------+------+------------------------------+---------------+--------+-------------------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +++++++++++++++++++++++++++++++++Route IPv4 info+++++++++++++++++++++++++++++++++
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: | Route |   Destination   |    Gateway    |     Genmask     | Interface | Flags |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   0   |     0.0.0.0     |  38.102.83.1  |     0.0.0.0     |    eth0   |   UG  |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   1   |   38.102.83.0   |    0.0.0.0    |  255.255.255.0  |    eth0   |   U   |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   2   | 169.254.169.254 | 38.102.83.126 | 255.255.255.255 |    eth0   |  UGH  |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-----------------+---------------+-----------------+-----------+-------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +++++++++++++++++++Route IPv6 info+++++++++++++++++++
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: | Route | Destination | Gateway | Interface | Flags |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   1   |  fe80::/64  |    ::   |    eth0   |   U   |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: |   3   |  multicast  |    ::   |    eth0   |   U   |
Jan 05 20:01:07 np0005574782.novalocal cloud-init[922]: ci-info: +-------+-------------+---------+-----------+-------+
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: new group: name=cloud-user, GID=1001
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: new user: name=cloud-user, UID=1001, GID=1001, home=/home/cloud-user, shell=/bin/bash, from=none
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: add 'cloud-user' to group 'adm'
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: add 'cloud-user' to group 'systemd-journal'
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: add 'cloud-user' to shadow group 'adm'
Jan 05 20:01:08 np0005574782.novalocal useradd[988]: add 'cloud-user' to shadow group 'systemd-journal'
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Generating public/private rsa key pair.
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_rsa_key
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_rsa_key.pub
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key fingerprint is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: SHA256:59xIO7tatGr+6P9pTtWXhuUqUY5JhbAlhIZ4Y3Kb5sI root@np0005574782.novalocal
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key's randomart image is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +---[RSA 3072]----+
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |    . . o+....   |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |   o * o  +..    |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |    = =  . . . . |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |     +    . = +..|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |  . o   S ++ o.+o|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |   E .   * =..o .|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |    .     O.o.   |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |        .+ +o.   |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |       +*+==+    |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Generating public/private ecdsa key pair.
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ecdsa_key
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ecdsa_key.pub
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key fingerprint is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: SHA256:Y58xu8dL0RyG2xDwyaUN03eOe/8ZiG6PgQtc9pMruwY root@np0005574782.novalocal
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key's randomart image is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +---[ECDSA 256]---+
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |          ..+..  |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |           o X. o|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |            B ++.|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |             B...|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |        Soo o +. |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |      .Eooo=o.o .|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |       o..+B.. o.|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |        .oo+B   +|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |        .+*=oo .o|
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Generating public/private ed25519 key pair.
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your identification has been saved in /etc/ssh/ssh_host_ed25519_key
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: Your public key has been saved in /etc/ssh/ssh_host_ed25519_key.pub
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key fingerprint is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: SHA256:2sAeswBEo4grTXPEwUbA6RvtMenwmj3RTNeAibA82oA root@np0005574782.novalocal
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: The key's randomart image is:
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +--[ED25519 256]--+
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: | o*B+o o         |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |++o+= o .        |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |E.Bo..   o       |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: | B+==.. . .      |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |+ oB.*=.S        |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |. . =ooB         |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |   + .+ .        |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |  o o            |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: |     .           |
Jan 05 20:01:08 np0005574782.novalocal cloud-init[922]: +----[SHA256]-----+
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Finished Cloud-init: Network Stage.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Reached target Cloud-config availability.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Reached target Network is Online.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting Cloud-init: Config Stage...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting Crash recovery kernel arming...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting Notify NFS peers of a restart...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting System Logging Service...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting OpenSSH server daemon...
Jan 05 20:01:08 np0005574782.novalocal sm-notify[1005]: Version 2.5.4 starting
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting Permit User Sessions...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started Notify NFS peers of a restart.
Jan 05 20:01:08 np0005574782.novalocal sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 05 20:01:08 np0005574782.novalocal sshd[1007]: Server listening on :: port 22.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started OpenSSH server daemon.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Finished Permit User Sessions.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started Command Scheduler.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started Getty on tty1.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started Serial Getty on ttyS0.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Reached target Login Prompts.
Jan 05 20:01:08 np0005574782.novalocal crond[1010]: (CRON) STARTUP (1.5.7)
Jan 05 20:01:08 np0005574782.novalocal crond[1010]: (CRON) INFO (Syslog will be used instead of sendmail.)
Jan 05 20:01:08 np0005574782.novalocal crond[1010]: (CRON) INFO (RANDOM_DELAY will be scaled with factor 26% if used.)
Jan 05 20:01:08 np0005574782.novalocal crond[1010]: (CRON) INFO (running with inotify support)
Jan 05 20:01:08 np0005574782.novalocal rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] start
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Started System Logging Service.
Jan 05 20:01:08 np0005574782.novalocal rsyslogd[1006]: imjournal: No statefile exists, /var/lib/rsyslog/imjournal.state will be created (ignore if this is first run): No such file or directory [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2040 ]
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Reached target Multi-User System.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Starting Record Runlevel Change in UTMP...
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: systemd-update-utmp-runlevel.service: Deactivated successfully.
Jan 05 20:01:08 np0005574782.novalocal systemd[1]: Finished Record Runlevel Change in UTMP.
Jan 05 20:01:08 np0005574782.novalocal rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:01:08 np0005574782.novalocal kdumpctl[1018]: kdump: No kdump initial ramdisk found.
Jan 05 20:01:08 np0005574782.novalocal kdumpctl[1018]: kdump: Rebuilding /boot/initramfs-5.14.0-654.el9.x86_64kdump.img
Jan 05 20:01:08 np0005574782.novalocal cloud-init[1101]: Cloud-init v. 24.4-8.el9 running 'modules:config' at Mon, 05 Jan 2026 20:01:08 +0000. Up 9.69 seconds.
Jan 05 20:01:09 np0005574782.novalocal systemd[1]: Finished Cloud-init: Config Stage.
Jan 05 20:01:09 np0005574782.novalocal systemd[1]: Starting Cloud-init: Final Stage...
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1265]: Cloud-init v. 24.4-8.el9 running 'modules:final' at Mon, 05 Jan 2026 20:01:09 +0000. Up 10.08 seconds.
Jan 05 20:01:09 np0005574782.novalocal dracut[1269]: dracut-057-102.git20250818.el9
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1286]: #############################################################
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1287]: -----BEGIN SSH HOST KEY FINGERPRINTS-----
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1289]: 256 SHA256:Y58xu8dL0RyG2xDwyaUN03eOe/8ZiG6PgQtc9pMruwY root@np0005574782.novalocal (ECDSA)
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1291]: 256 SHA256:2sAeswBEo4grTXPEwUbA6RvtMenwmj3RTNeAibA82oA root@np0005574782.novalocal (ED25519)
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1293]: 3072 SHA256:59xIO7tatGr+6P9pTtWXhuUqUY5JhbAlhIZ4Y3Kb5sI root@np0005574782.novalocal (RSA)
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1294]: -----END SSH HOST KEY FINGERPRINTS-----
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1295]: #############################################################
Jan 05 20:01:09 np0005574782.novalocal cloud-init[1265]: Cloud-init v. 24.4-8.el9 finished at Mon, 05 Jan 2026 20:01:09 +0000. Datasource DataSourceConfigDrive [net,ver=2][source=/dev/sr0].  Up 10.27 seconds
Jan 05 20:01:09 np0005574782.novalocal systemd[1]: Finished Cloud-init: Final Stage.
Jan 05 20:01:09 np0005574782.novalocal systemd[1]: Reached target Cloud-init target.
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1299]: Connection reset by 38.102.83.114 port 57342 [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1301]: Unable to negotiate with 38.102.83.114 port 57348: no matching host key type found. Their offer: ssh-ed25519,ssh-ed25519-cert-v01@openssh.com [preauth]
Jan 05 20:01:09 np0005574782.novalocal dracut[1271]: Executing: /usr/bin/dracut --quiet --hostonly --hostonly-cmdline --hostonly-i18n --hostonly-mode strict --hostonly-nics  --mount "/dev/disk/by-uuid/f677d6a5-1bcd-4a82-bb95-263d2adaa51b /sysroot xfs rw,relatime,seclabel,attr2,inode64,logbufs=8,logbsize=32k,noquota" --squash-compressor zstd --no-hostonly-default-device --add-confdir /lib/kdump/dracut.conf.d -f /boot/initramfs-5.14.0-654.el9.x86_64kdump.img 5.14.0-654.el9.x86_64
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1320]: Unable to negotiate with 38.102.83.114 port 57374: no matching host key type found. Their offer: ecdsa-sha2-nistp384,ecdsa-sha2-nistp384-cert-v01@openssh.com [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1327]: Unable to negotiate with 38.102.83.114 port 57376: no matching host key type found. Their offer: ecdsa-sha2-nistp521,ecdsa-sha2-nistp521-cert-v01@openssh.com [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1307]: Connection closed by 38.102.83.114 port 57360 [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1348]: Connection reset by 38.102.83.114 port 57392 [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1361]: Unable to negotiate with 38.102.83.114 port 57398: no matching host key type found. Their offer: ssh-rsa,ssh-rsa-cert-v01@openssh.com [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1367]: Unable to negotiate with 38.102.83.114 port 57406: no matching host key type found. Their offer: ssh-dss,ssh-dss-cert-v01@openssh.com [preauth]
Jan 05 20:01:09 np0005574782.novalocal sshd-session[1334]: Connection closed by 38.102.83.114 port 57378 [preauth]
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command 'networkctl' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-networkd' will not be installed, because command '/usr/lib/systemd/systemd-networkd-wait-online' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: Module 'ifcfg' will not be installed, because it's in the list to be omitted!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: Module 'plymouth' will not be installed, because it's in the list to be omitted!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: Module 'resume' will not be installed, because it's in the list to be omitted!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: dracut module 'biosdevname' will not be installed, because command 'biosdevname' could not be found!
Jan 05 20:01:10 np0005574782.novalocal dracut[1271]: Module 'earlykdump' will not be installed, because it's in the list to be omitted!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: memstrack is not available
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command 'resolvectl' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'systemd-resolved' will not be installed, because command '/usr/lib/systemd/systemd-resolved' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-timesyncd' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'systemd-timesyncd' will not be installed, because command '/usr/lib/systemd/systemd-time-wait-sync' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'busybox' will not be installed, because command 'busybox' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'dbus-daemon' will not be installed, because command 'dbus-daemon' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'rngd' will not be installed, because command 'rngd' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmanctl' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'connman' will not be installed, because command 'connmand-wait-online' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'network-wicked' will not be installed, because command 'wicked' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: 62bluetooth: Could not find any command of '/usr/lib/bluetooth/bluetoothd /usr/libexec/bluetooth/bluetoothd'!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'lvmmerge' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'lvmthinpool-monitor' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'btrfs' will not be installed, because command 'btrfs' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'dmraid' will not be installed, because command 'dmraid' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'lvm' will not be installed, because command 'lvm' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'mdraid' will not be installed, because command 'mdadm' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'pcsc' will not be installed, because command 'pcscd' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'tpm2-tss' will not be installed, because command 'tpm2' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'cifs' will not be installed, because command 'mount.cifs' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsi-iname' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsiadm' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'iscsi' will not be installed, because command 'iscsid' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'nvmf' will not be installed, because command 'nvme' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: dracut module 'memstrack' will not be installed, because command 'memstrack' could not be found!
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: memstrack is not available
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: If you need to use rd.memdebug>=4, please install memstrack and procps-ng
Jan 05 20:01:11 np0005574782.novalocal dracut[1271]: *** Including module: systemd ***
Jan 05 20:01:11 np0005574782.novalocal chronyd[798]: Selected source 167.160.187.12 (2.centos.pool.ntp.org)
Jan 05 20:01:11 np0005574782.novalocal chronyd[798]: System clock TAI offset set to 37 seconds
Jan 05 20:01:12 np0005574782.novalocal dracut[1271]: *** Including module: fips ***
Jan 05 20:01:12 np0005574782.novalocal dracut[1271]: *** Including module: systemd-initrd ***
Jan 05 20:01:12 np0005574782.novalocal dracut[1271]: *** Including module: i18n ***
Jan 05 20:01:12 np0005574782.novalocal dracut[1271]: *** Including module: drm ***
Jan 05 20:01:13 np0005574782.novalocal dracut[1271]: *** Including module: prefixdevname ***
Jan 05 20:01:13 np0005574782.novalocal dracut[1271]: *** Including module: kernel-modules ***
Jan 05 20:01:13 np0005574782.novalocal kernel: block vda: the capability attribute has been deprecated.
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: kernel-modules-extra ***
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]:   kernel-modules-extra: configuration source "/run/depmod.d" does not exist
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]:   kernel-modules-extra: configuration source "/lib/depmod.d" does not exist
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]:   kernel-modules-extra: parsing configuration file "/etc/depmod.d/dist.conf"
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]:   kernel-modules-extra: /etc/depmod.d/dist.conf: added "updates extra built-in weak-updates" to the list of search directories
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: qemu ***
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: fstab-sys ***
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: rootfs-block ***
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: terminfo ***
Jan 05 20:01:14 np0005574782.novalocal dracut[1271]: *** Including module: udev-rules ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: Skipping udev rule: 91-permissions.rules
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: Skipping udev rule: 80-drivers-modprobe.rules
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: virtiofs ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: dracut-systemd ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: usrmount ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: base ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: fs-lib ***
Jan 05 20:01:15 np0005574782.novalocal dracut[1271]: *** Including module: kdumpbase ***
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 25 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 25 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 31 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 31 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 28 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 28 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 32 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 32 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 30 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 30 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 29 affinity: Operation not permitted
Jan 05 20:01:16 np0005574782.novalocal irqbalance[782]: IRQ 29 affinity is now unmanaged
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]: *** Including module: microcode_ctl-fw_dir_override ***
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:   microcode_ctl module: mangling fw_dir
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: reset fw_dir to "/lib/firmware/updates /lib/firmware"
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-2d-07"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-2d-07" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4e-03"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-4e-03" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-4f-01"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-4f-01" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-55-04"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-55-04" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-5e-03"...
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-5e-03" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8c-01"...
Jan 05 20:01:16 np0005574782.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8c-01" is ignored
Jan 05 20:01:16 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-0xca"...
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8e-9e-0x-0xca" is ignored
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8e-9e-0x-dell"...
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8e-9e-0x-dell" is ignored
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: processing data directory  "/usr/share/microcode_ctl/ucode_with_caveats/intel-06-8f-08"...
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: configuration "intel-06-8f-08" is ignored
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]:     microcode_ctl: final fw_dir: "/lib/firmware/updates /lib/firmware"
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]: *** Including module: openssl ***
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]: *** Including module: shutdown ***
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]: *** Including module: squash ***
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]: *** Including modules done ***
Jan 05 20:01:17 np0005574782.novalocal dracut[1271]: *** Installing kernel module dependencies ***
Jan 05 20:01:18 np0005574782.novalocal dracut[1271]: *** Installing kernel module dependencies done ***
Jan 05 20:01:18 np0005574782.novalocal dracut[1271]: *** Resolving executable dependencies ***
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: *** Resolving executable dependencies done ***
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: *** Generating early-microcode cpio image ***
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: *** Store current command line parameters ***
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: Stored kernel commandline:
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: No dracut internal kernel commandline stored in the initramfs
Jan 05 20:01:20 np0005574782.novalocal dracut[1271]: *** Install squash loader ***
Jan 05 20:01:21 np0005574782.novalocal dracut[1271]: *** Squashing the files inside the initramfs ***
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: *** Squashing the files inside the initramfs done ***
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: *** Creating image file '/boot/initramfs-5.14.0-654.el9.x86_64kdump.img' ***
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: *** Hardlinking files ***
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Mode:           real
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Files:          50
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Linked:         0 files
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Compared:       0 xattrs
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Compared:       0 files
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Saved:          0 B
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: Duration:       0.000994 seconds
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: *** Hardlinking files done ***
Jan 05 20:01:22 np0005574782.novalocal dracut[1271]: *** Creating initramfs image file '/boot/initramfs-5.14.0-654.el9.x86_64kdump.img' done ***
Jan 05 20:01:23 np0005574782.novalocal kdumpctl[1018]: kdump: kexec: loaded kdump kernel
Jan 05 20:01:23 np0005574782.novalocal kdumpctl[1018]: kdump: Starting kdump: [OK]
Jan 05 20:01:23 np0005574782.novalocal systemd[1]: Finished Crash recovery kernel arming.
Jan 05 20:01:23 np0005574782.novalocal systemd[1]: Startup finished in 1.772s (kernel) + 2.769s (initrd) + 19.713s (userspace) = 24.255s.
Jan 05 20:01:24 np0005574782.novalocal sshd-session[4296]: Accepted publickey for zuul from 38.102.83.114 port 49638 ssh2: RSA SHA256:zhs3MiW0JhxzckYcMHQES8SMYHj1iGcomnyzmbiwor8
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Created slice User Slice of UID 1000.
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Starting User Runtime Directory /run/user/1000...
Jan 05 20:01:24 np0005574782.novalocal systemd-logind[788]: New session 1 of user zuul.
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Finished User Runtime Directory /run/user/1000.
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Starting User Manager for UID 1000...
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: pam_unix(systemd-user:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Queued start job for default target Main User Target.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Created slice User Application Slice.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Started Mark boot as successful after the user session has run 2 minutes.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Started Daily Cleanup of User's Temporary Directories.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Reached target Paths.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Reached target Timers.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Starting D-Bus User Message Bus Socket...
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Starting Create User's Volatile Files and Directories...
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Listening on D-Bus User Message Bus Socket.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Reached target Sockets.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Finished Create User's Volatile Files and Directories.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Reached target Basic System.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Reached target Main User Target.
Jan 05 20:01:24 np0005574782.novalocal systemd[4300]: Startup finished in 172ms.
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Started User Manager for UID 1000.
Jan 05 20:01:24 np0005574782.novalocal systemd[1]: Started Session 1 of User zuul.
Jan 05 20:01:24 np0005574782.novalocal sshd-session[4296]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:01:25 np0005574782.novalocal python3[4382]: ansible-setup Invoked with gather_subset=['!all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:01:27 np0005574782.novalocal python3[4410]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:01:34 np0005574782.novalocal python3[4468]: ansible-setup Invoked with gather_subset=['network'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:01:35 np0005574782.novalocal python3[4508]: ansible-zuul_console Invoked with path=/tmp/console-{log_uuid}.log port=19885 state=present
Jan 05 20:01:36 np0005574782.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 20:01:37 np0005574782.novalocal python3[4536]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDIrP/TozxaY8mjCe9CF4VdW5NESKSZ7tJTB9WaIqAqxavCYSkQsDkOSjSrlAO4qrPsQfhjkpJkDRnyYQ1OlpwwOwII0PU2yF6JMXdPXcD6I75OrXYs7x2LPZvOgNXBbFSVSjI1i+Q5ZLQjU3dHGENW/0aoE0rcBuZF9wP1HIMir78xqCn+G/vdOngx514U7VEfNYq9qmCC41eJoig0cZ9EaS1OrBXKMpwbQ9w0IHciGFSOx+Z/bifETCi5NjIND7B3SID6PXLpw8uXgLPdrofW6gUaSl7XYxXYWItxyZjz09j8lg506SS9e1pAb3BC/19Z2rla0WoWKP0Oy/Wc0SyprbfoOlLctm9TrgNAhEciEXUv3UH9boVdAHupJ7tCA9P2A9vhEvTUqd3/M2OYG8Ci+TUDdqv2sGb0rMHD3Jme0tIDtcX6Uaxz45tpQ6ksdV/p3752PzAQbssO6kQVPfKoSg94HnhcnH000wE4gRZ1skf7is4bX0P3R1QJBdMmqEs= zuul-build-sshkey manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:38 np0005574782.novalocal python3[4560]: ansible-file Invoked with state=directory path=/home/zuul/.ssh mode=448 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:38 np0005574782.novalocal python3[4659]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:01:38 np0005574782.novalocal python3[4730]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767643298.1906128-207-142556225605040/source dest=/home/zuul/.ssh/id_rsa mode=384 force=False _original_basename=038d726625214298bc48939cf7e004c1_id_rsa follow=False checksum=a791b004d75563e6ed4fc785bdb338f20a9e11a3 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:39 np0005574782.novalocal python3[4853]: ansible-ansible.legacy.stat Invoked with path=/home/zuul/.ssh/id_rsa.pub follow=False get_checksum=False checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:01:39 np0005574782.novalocal python3[4924]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767643299.2661507-240-17128043818333/source dest=/home/zuul/.ssh/id_rsa.pub mode=420 force=False _original_basename=038d726625214298bc48939cf7e004c1_id_rsa.pub follow=False checksum=d279bcd2d90cd33f22d804376bee056a37665b74 backup=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:41 np0005574782.novalocal python3[4972]: ansible-ping Invoked with data=pong
Jan 05 20:01:42 np0005574782.novalocal python3[4996]: ansible-setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:01:44 np0005574782.novalocal python3[5054]: ansible-zuul_debug_info Invoked with ipv4_route_required=False ipv6_route_required=False image_manifest_files=['/etc/dib-builddate.txt', '/etc/image-hostname.txt'] image_manifest=None traceroute_host=None
Jan 05 20:01:45 np0005574782.novalocal python3[5086]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:46 np0005574782.novalocal python3[5110]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:46 np0005574782.novalocal python3[5134]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:46 np0005574782.novalocal python3[5158]: ansible-file Invoked with path=/home/zuul/zuul-output/logs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:46 np0005574782.novalocal python3[5182]: ansible-file Invoked with path=/home/zuul/zuul-output/artifacts state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:47 np0005574782.novalocal python3[5206]: ansible-file Invoked with path=/home/zuul/zuul-output/docs state=directory mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:48 np0005574782.novalocal sudo[5230]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hvlmklcfsflrcaumbvndssgupcnkvqrz ; /usr/bin/python3'
Jan 05 20:01:48 np0005574782.novalocal sudo[5230]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:01:48 np0005574782.novalocal python3[5232]: ansible-file Invoked with path=/etc/ci state=directory owner=root group=root mode=493 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:48 np0005574782.novalocal sudo[5230]: pam_unix(sudo:session): session closed for user root
Jan 05 20:01:49 np0005574782.novalocal sudo[5308]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxeymijpwwjhkhpfdvdvewnohxgqxpns ; /usr/bin/python3'
Jan 05 20:01:49 np0005574782.novalocal sudo[5308]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:01:49 np0005574782.novalocal python3[5310]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/mirror_info.sh follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:01:49 np0005574782.novalocal sudo[5308]: pam_unix(sudo:session): session closed for user root
Jan 05 20:01:49 np0005574782.novalocal sudo[5381]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zihejauenihwjuvuevodfwmckwnrfrlm ; /usr/bin/python3'
Jan 05 20:01:49 np0005574782.novalocal sudo[5381]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:01:49 np0005574782.novalocal python3[5383]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/mirror_info.sh owner=root group=root mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767643308.913557-21-273398403470232/source follow=False _original_basename=mirror_info.sh.j2 checksum=92d92a03afdddee82732741071f662c729080c35 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:01:49 np0005574782.novalocal sudo[5381]: pam_unix(sudo:session): session closed for user root
Jan 05 20:01:50 np0005574782.novalocal python3[5431]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAABIwAAAQEA4Z/c9osaGGtU6X8fgELwfj/yayRurfcKA0HMFfdpPxev2dbwljysMuzoVp4OZmW1gvGtyYPSNRvnzgsaabPNKNo2ym5NToCP6UM+KSe93aln4BcM/24mXChYAbXJQ5Bqq/pIzsGs/pKetQN+vwvMxLOwTvpcsCJBXaa981RKML6xj9l/UZ7IIq1HSEKMvPLxZMWdu0Ut8DkCd5F4nOw9Wgml2uYpDCj5LLCrQQ9ChdOMz8hz6SighhNlRpPkvPaet3OXxr/ytFMu7j7vv06CaEnuMMiY2aTWN1Imin9eHAylIqFHta/3gFfQSWt9jXM7owkBLKL7ATzhaAn+fjNupw== arxcruz@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:50 np0005574782.novalocal python3[5455]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQDS4Fn6k4deCnIlOtLWqZJyksbepjQt04j8Ed8CGx9EKkj0fKiAxiI4TadXQYPuNHMixZy4Nevjb6aDhL5Z906TfvNHKUrjrG7G26a0k8vdc61NEQ7FmcGMWRLwwc6ReDO7lFpzYKBMk4YqfWgBuGU/K6WLKiVW2cVvwIuGIaYrE1OiiX0iVUUk7KApXlDJMXn7qjSYynfO4mF629NIp8FJal38+Kv+HA+0QkE5Y2xXnzD4Lar5+keymiCHRntPppXHeLIRzbt0gxC7v3L72hpQ3BTBEzwHpeS8KY+SX1y5lRMN45thCHfJqGmARJREDjBvWG8JXOPmVIKQtZmVcD5b mandreou@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:51 np0005574782.novalocal python3[5479]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC9MiLfy30deHA7xPOAlew5qUq3UP2gmRMYJi8PtkjFB20/DKeWwWNnkZPqP9AayruRoo51SIiVg870gbZE2jYl+Ncx/FYDe56JeC3ySZsXoAVkC9bP7gkOGqOmJjirvAgPMI7bogVz8i+66Q4Ar7OKTp3762G4IuWPPEg4ce4Y7lx9qWocZapHYq4cYKMxrOZ7SEbFSATBbe2bPZAPKTw8do/Eny+Hq/LkHFhIeyra6cqTFQYShr+zPln0Cr+ro/pDX3bB+1ubFgTpjpkkkQsLhDfR6cCdCWM2lgnS3BTtYj5Ct9/JRPR5YOphqZz+uB+OEu2IL68hmU9vNTth1KeX rlandy@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:51 np0005574782.novalocal python3[5503]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIFCbgz8gdERiJlk2IKOtkjQxEXejrio6ZYMJAVJYpOIp raukadah@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:51 np0005574782.novalocal python3[5527]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIBqb3Q/9uDf4LmihQ7xeJ9gA/STIQUFPSfyyV0m8AoQi bshewale@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:51 np0005574782.novalocal python3[5551]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABAQC0I8QqQx0Az2ysJt2JuffucLijhBqnsXKEIx5GyHwxVULROa8VtNFXUDH6ZKZavhiMcmfHB2+TBTda+lDP4FldYj06dGmzCY+IYGa+uDRdxHNGYjvCfLFcmLlzRK6fNbTcui+KlUFUdKe0fb9CRoGKyhlJD5GRkM1Dv+Yb6Bj+RNnmm1fVGYxzmrD2utvffYEb0SZGWxq2R9gefx1q/3wCGjeqvufEV+AskPhVGc5T7t9eyZ4qmslkLh1/nMuaIBFcr9AUACRajsvk6mXrAN1g3HlBf2gQlhi1UEyfbqIQvzzFtsbLDlSum/KmKjy818GzvWjERfQ0VkGzCd9bSLVL dviroel@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:52 np0005574782.novalocal python3[5575]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDLOQd4ZLtkZXQGY6UwAr/06ppWQK4fDO3HaqxPk98csyOCBXsliSKK39Bso828+5srIXiW7aI6aC9P5mwi4mUZlGPfJlQbfrcGvY+b/SocuvaGK+1RrHLoJCT52LBhwgrzlXio2jeksZeein8iaTrhsPrOAs7KggIL/rB9hEiB3NaOPWhhoCP4vlW6MEMExGcqB/1FVxXFBPnLkEyW0Lk7ycVflZl2ocRxbfjZi0+tI1Wlinp8PvSQSc/WVrAcDgKjc/mB4ODPOyYy3G8FHgfMsrXSDEyjBKgLKMsdCrAUcqJQWjkqXleXSYOV4q3pzL+9umK+q/e3P/bIoSFQzmJKTU1eDfuvPXmow9F5H54fii/Da7ezlMJ+wPGHJrRAkmzvMbALy7xwswLhZMkOGNtRcPqaKYRmIBKpw3o6bCTtcNUHOtOQnzwY8JzrM2eBWJBXAANYw+9/ho80JIiwhg29CFNpVBuHbql2YxJQNrnl90guN65rYNpDxdIluweyUf8= anbanerj@kaermorhen manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:52 np0005574782.novalocal python3[5599]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQC3VwV8Im9kRm49lt3tM36hj4Zv27FxGo4C1Q/0jqhzFmHY7RHbmeRr8ObhwWoHjXSozKWg8FL5ER0z3hTwL0W6lez3sL7hUaCmSuZmG5Hnl3x4vTSxDI9JZ/Y65rtYiiWQo2fC5xJhU/4+0e5e/pseCm8cKRSu+SaxhO+sd6FDojA2x1BzOzKiQRDy/1zWGp/cZkxcEuB1wHI5LMzN03c67vmbu+fhZRAUO4dQkvcnj2LrhQtpa+ytvnSjr8icMDosf1OsbSffwZFyHB/hfWGAfe0eIeSA2XPraxiPknXxiPKx2MJsaUTYbsZcm3EjFdHBBMumw5rBI74zLrMRvCO9GwBEmGT4rFng1nP+yw5DB8sn2zqpOsPg1LYRwCPOUveC13P6pgsZZPh812e8v5EKnETct+5XI3dVpdw6CnNiLwAyVAF15DJvBGT/u1k0Myg/bQn+Gv9k2MSj6LvQmf6WbZu2Wgjm30z3FyCneBqTL7mLF19YXzeC0ufHz5pnO1E= dasm@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:52 np0005574782.novalocal python3[5623]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIHUnwjB20UKmsSed9X73eGNV5AOEFccQ3NYrRW776pEk cjeanner manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:53 np0005574782.novalocal python3[5647]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDercCMGn8rW1C4P67tHgtflPdTeXlpyUJYH+6XDd2lR jgilaber@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:53 np0005574782.novalocal python3[5671]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIAMI6kkg9Wg0sG7jIJmyZemEBwUn1yzNpQQd3gnulOmZ adrianfuscoarnejo@gmail.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:54 np0005574782.novalocal python3[5695]: ansible-authorized_key Invoked with user=zuul state=present key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBPijwpQu/3jhhhBZInXNOLEH57DrknPc3PLbsRvYyJIFzwYjX+WD4a7+nGnMYS42MuZk6TJcVqgnqofVx4isoD4= ramishra@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:54 np0005574782.novalocal python3[5719]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIGpU/BepK3qX0NRf5Np+dOBDqzQEefhNrw2DCZaH3uWW rebtoor@monolith manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:54 np0005574782.novalocal python3[5743]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDK0iKdi8jQTpQrDdLVH/AAgLVYyTXF7AQ1gjc/5uT3t ykarel@yatinkarel manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:54 np0005574782.novalocal python3[5767]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIF/V/cLotA6LZeO32VL45Hd78skuA2lJA425Sm2LlQeZ fmount@horcrux manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:55 np0005574782.novalocal python3[5791]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIDa7QCjuDMVmRPo1rREbGwzYeBCYVN+Ou/3WKXZEC6Sr manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:55 np0005574782.novalocal python3[5815]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQCfNtF7NvKl915TGsGGoseUb06Hj8L/S4toWf0hExeY+F00woL6NvBlJD0nDct+P5a22I4EhvoQCRQ8reaPCm1lybR3uiRIJsj+8zkVvLwby9LXzfZorlNG9ofjd00FEmB09uW/YvTl6Q9XwwwX6tInzIOv3TMqTHHGOL74ibbj8J/FJR0cFEyj0z4WQRvtkh32xAHl83gbuINryMt0sqRI+clj2381NKL55DRLQrVw0gsfqqxiHAnXg21qWmc4J+b9e9kiuAFQjcjwTVkwJCcg3xbPwC/qokYRby/Y5S40UUd7/jEARGXT7RZgpzTuDd1oZiCVrnrqJNPaMNdVv5MLeFdf1B7iIe5aa/fGouX7AO4SdKhZUdnJmCFAGvjC6S3JMZ2wAcUl+OHnssfmdj7XL50cLo27vjuzMtLAgSqi6N99m92WCF2s8J9aVzszX7Xz9OKZCeGsiVJp3/NdABKzSEAyM9xBD/5Vho894Sav+otpySHe3p6RUTgbB5Zu8VyZRZ/UtB3ueXxyo764yrc6qWIDqrehm84Xm9g+/jpIBzGPl07NUNJpdt/6Sgf9RIKXw/7XypO5yZfUcuFNGTxLfqjTNrtgLZNcjfav6sSdVXVcMPL//XNuRdKmVFaO76eV/oGMQGr1fGcCD+N+CpI7+Q+fCNB6VFWG4nZFuI/Iuw== averdagu@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:55 np0005574782.novalocal python3[5839]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDq8l27xI+QlQVdS4djp9ogSoyrNE2+Ox6vKPdhSNL1J3PE5w+WCSvMz9A5gnNuH810zwbekEApbxTze/gLQJwBHA52CChfURpXrFaxY7ePXRElwKAL3mJfzBWY/c5jnNL9TCVmFJTGZkFZP3Nh+BMgZvL6xBkt3WKm6Uq18qzd9XeKcZusrA+O+uLv1fVeQnadY9RIqOCyeFYCzLWrUfTyE8x/XG0hAWIM7qpnF2cALQS2h9n4hW5ybiUN790H08wf9hFwEf5nxY9Z9dVkPFQiTSGKNBzmnCXU9skxS/xhpFjJ5duGSZdtAHe9O+nGZm9c67hxgtf8e5PDuqAdXEv2cf6e3VBAt+Bz8EKI3yosTj0oZHfwr42Yzb1l/SKy14Rggsrc9KAQlrGXan6+u2jcQqqx7l+SWmnpFiWTV9u5cWj2IgOhApOitmRBPYqk9rE2usfO0hLn/Pj/R/Nau4803e1/EikdLE7Ps95s9mX5jRDjAoUa2JwFF5RsVFyL910= ashigupt@ashigupt.remote.csb manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:56 np0005574782.novalocal python3[5863]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIOKLl0NYKwoZ/JY5KeZU8VwRAggeOxqQJeoqp3dsAaY9 manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:56 np0005574782.novalocal python3[5887]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIASASQOH2BcOyLKuuDOdWZlPi2orcjcA8q4400T73DLH evallesp@fedora manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:56 np0005574782.novalocal python3[5911]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAILeBWlamUph+jRKV2qrx1PGU7vWuGIt5+z9k96I8WehW amsinha@amsinha-mac manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:57 np0005574782.novalocal python3[5935]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIANvVgvJBlK3gb1yz5uef/JqIGq4HLEmY2dYA8e37swb morenod@redhat-laptop manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:57 np0005574782.novalocal python3[5959]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAACAQDZdI7t1cxYx65heVI24HTV4F7oQLW1zyfxHreL2TIJKxjyrUUKIFEUmTutcBlJRLNT2Eoix6x1sOw9YrchloCLcn//SGfTElr9mSc5jbjb7QXEU+zJMhtxyEJ1Po3CUGnj7ckiIXw7wcawZtrEOAQ9pH3ExYCJcEMiyNjRQZCxT3tPK+S4B95EWh5Fsrz9CkwpjNRPPH7LigCeQTM3Wc7r97utAslBUUvYceDSLA7rMgkitJE38b7rZBeYzsGQ8YYUBjTCtehqQXxCRjizbHWaaZkBU+N3zkKB6n/iCNGIO690NK7A/qb6msTijiz1PeuM8ThOsi9qXnbX5v0PoTpcFSojV7NHAQ71f0XXuS43FhZctT+Dcx44dT8Fb5vJu2cJGrk+qF8ZgJYNpRS7gPg0EG2EqjK7JMf9ULdjSu0r+KlqIAyLvtzT4eOnQipoKlb/WG5D/0ohKv7OMQ352ggfkBFIQsRXyyTCT98Ft9juqPuahi3CAQmP4H9dyE+7+Kz437PEtsxLmfm6naNmWi7Ee1DqWPwS8rEajsm4sNM4wW9gdBboJQtc0uZw0DfLj1I9r3Mc8Ol0jYtz0yNQDSzVLrGCaJlC311trU70tZ+ZkAVV6Mn8lOhSbj1cK0lvSr6ZK4dgqGl3I1eTZJJhbLNdg7UOVaiRx9543+C/p/As7w== brjackma@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:57 np0005574782.novalocal python3[5983]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIKwedoZ0TWPJX/z/4TAbO/kKcDZOQVgRH0hAqrL5UCI1 vcastell@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:57 np0005574782.novalocal python3[6007]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIEmv8sE8GCk6ZTPIqF0FQrttBdL3mq7rCm/IJy0xDFh7 michburk@redhat.com manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:01:58 np0005574782.novalocal python3[6031]: ansible-authorized_key Invoked with user=zuul state=present key=ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAICy6GpGEtwevXEEn4mmLR5lmSLe23dGgAvzkB9DMNbkf rsafrono@rsafrono manage_dir=True exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:02:00 np0005574782.novalocal sudo[6055]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evgvsuvkixqobzgnfqguxydwnfxxwgob ; /usr/bin/python3'
Jan 05 20:02:00 np0005574782.novalocal sudo[6055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:01 np0005574782.novalocal python3[6057]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 05 20:02:01 np0005574782.novalocal systemd[1]: Starting Time & Date Service...
Jan 05 20:02:01 np0005574782.novalocal systemd[1]: Started Time & Date Service.
Jan 05 20:02:01 np0005574782.novalocal systemd-timedated[6059]: Changed time zone to 'UTC' (UTC).
Jan 05 20:02:01 np0005574782.novalocal sudo[6055]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:02 np0005574782.novalocal sudo[6086]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ioxkgxfygjrqitwcbenqnvjbmsosqtib ; /usr/bin/python3'
Jan 05 20:02:02 np0005574782.novalocal sudo[6086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:02 np0005574782.novalocal python3[6088]: ansible-file Invoked with path=/etc/nodepool state=directory mode=511 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:02 np0005574782.novalocal sudo[6086]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:03 np0005574782.novalocal python3[6164]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:02:03 np0005574782.novalocal python3[6235]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes src=/home/zuul/.ansible/tmp/ansible-tmp-1767643323.1604142-153-5270159827783/source _original_basename=tmp7e2qvhjn follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:04 np0005574782.novalocal python3[6335]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/sub_nodes_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:02:04 np0005574782.novalocal python3[6406]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/sub_nodes_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767643324.0292668-183-240239245162466/source _original_basename=tmpyv_4bfsz follow=False checksum=da39a3ee5e6b4b0d3255bfef95601890afd80709 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:05 np0005574782.novalocal sudo[6506]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykoqmluiuoyvlhumfbhqwnfbpvusgqrw ; /usr/bin/python3'
Jan 05 20:02:05 np0005574782.novalocal sudo[6506]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:05 np0005574782.novalocal python3[6508]: ansible-ansible.legacy.stat Invoked with path=/etc/nodepool/node_private follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:02:05 np0005574782.novalocal sudo[6506]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:05 np0005574782.novalocal sudo[6579]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqsegxgugjgamfzigdkjiqgbbjrjbruk ; /usr/bin/python3'
Jan 05 20:02:05 np0005574782.novalocal sudo[6579]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:05 np0005574782.novalocal python3[6581]: ansible-ansible.legacy.copy Invoked with dest=/etc/nodepool/node_private src=/home/zuul/.ansible/tmp/ansible-tmp-1767643325.0954523-231-122717875160097/source _original_basename=tmppz2ncjv3 follow=False checksum=4533a6af5c84c28dd874a752186ca59a7a5dd951 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:05 np0005574782.novalocal sudo[6579]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:06 np0005574782.novalocal python3[6629]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa /etc/nodepool/id_rsa zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:02:06 np0005574782.novalocal python3[6655]: ansible-ansible.legacy.command Invoked with _raw_params=cp .ssh/id_rsa.pub /etc/nodepool/id_rsa.pub zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:02:06 np0005574782.novalocal sudo[6733]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ndykvcgccptijiuzrigfdggpptynrwoo ; /usr/bin/python3'
Jan 05 20:02:06 np0005574782.novalocal sudo[6733]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:07 np0005574782.novalocal python3[6735]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/zuul-sudo-grep follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:02:07 np0005574782.novalocal sudo[6733]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:07 np0005574782.novalocal sudo[6806]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-liqouqgbjjplxhyfhbhpcfcbygdwtduh ; /usr/bin/python3'
Jan 05 20:02:07 np0005574782.novalocal sudo[6806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:07 np0005574782.novalocal python3[6808]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/zuul-sudo-grep mode=288 src=/home/zuul/.ansible/tmp/ansible-tmp-1767643326.7077072-273-142090767415503/source _original_basename=tmpbbav00au follow=False checksum=bdca1a77493d00fb51567671791f4aa30f66c2f0 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:07 np0005574782.novalocal sudo[6806]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:07 np0005574782.novalocal sudo[6857]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpoyydhbzvgryczgdjxwcoxltakfjabb ; /usr/bin/python3'
Jan 05 20:02:07 np0005574782.novalocal sudo[6857]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:07 np0005574782.novalocal python3[6859]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/visudo -c zuul_log_id=fa163efc-24cc-cf70-4556-00000000001d-1-compute0 zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:02:07 np0005574782.novalocal sudo[6857]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:08 np0005574782.novalocal python3[6887]: ansible-ansible.legacy.command Invoked with executable=/bin/bash _raw_params=env
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-cf70-4556-00000000001e-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None creates=None removes=None stdin=None
Jan 05 20:02:09 np0005574782.novalocal python3[6916]: ansible-file Invoked with path=/home/zuul/workspace state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:27 np0005574782.novalocal sudo[6940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rywzhbmpqxabutjdqlsfirehycxkaryv ; /usr/bin/python3'
Jan 05 20:02:27 np0005574782.novalocal sudo[6940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:02:27 np0005574782.novalocal python3[6942]: ansible-ansible.builtin.file Invoked with path=/etc/ci/env state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:02:27 np0005574782.novalocal sudo[6940]: pam_unix(sudo:session): session closed for user root
Jan 05 20:02:31 np0005574782.novalocal systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: [1af4:1000] type 00 class 0x020000 conventional PCI endpoint
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x0000-0x003f]
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0x00000000-0x00000fff]
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x00000000-0x00003fff 64bit pref]
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: ROM [mem 0x00000000-0x0007ffff pref]
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: ROM [mem 0xc0000000-0xc007ffff pref]: assigned
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 4 [mem 0x240000000-0x240003fff 64bit pref]: assigned
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 1 [mem 0xc0080000-0xc0080fff]: assigned
Jan 05 20:03:06 np0005574782.novalocal kernel: pci 0000:00:07.0: BAR 0 [io  0x1000-0x103f]: assigned
Jan 05 20:03:06 np0005574782.novalocal kernel: virtio-pci 0000:00:07.0: enabling device (0000 -> 0003)
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8307] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 20:03:06 np0005574782.novalocal systemd-udevd[6945]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8609] device (eth1): state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8633] settings: (eth1): created default wired connection 'Wired connection 1'
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8635] device (eth1): carrier: link connected
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8637] device (eth1): state change: unavailable -> disconnected (reason 'carrier-changed', managed-type: 'full')
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8641] policy: auto-activating connection 'Wired connection 1' (4f272fd3-0f0d-3c27-be08-0346479a4132)
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8644] device (eth1): Activation: starting connection 'Wired connection 1' (4f272fd3-0f0d-3c27-be08-0346479a4132)
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8645] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8646] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8649] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:03:06 np0005574782.novalocal NetworkManager[858]: <info>  [1767643386.8652] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:03:07 np0005574782.novalocal python3[6972]: ansible-ansible.legacy.command Invoked with _raw_params=ip -j link zuul_log_id=fa163efc-24cc-3f84-3f58-0000000000fc-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:03:14 np0005574782.novalocal sudo[7051]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfttozjyqshceuzjgqeyhgsklbuhxhkh ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 20:03:14 np0005574782.novalocal sudo[7051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:03:14 np0005574782.novalocal python3[7053]: ansible-ansible.legacy.stat Invoked with path=/etc/NetworkManager/system-connections/ci-private-network.nmconnection follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:03:14 np0005574782.novalocal sudo[7051]: pam_unix(sudo:session): session closed for user root
Jan 05 20:03:15 np0005574782.novalocal sudo[7124]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cyyoeuptnizhsmeobnlffqefbvvgidym ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 20:03:15 np0005574782.novalocal sudo[7124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:03:15 np0005574782.novalocal python3[7126]: ansible-ansible.legacy.copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767643394.4824553-102-47327389073502/source dest=/etc/NetworkManager/system-connections/ci-private-network.nmconnection mode=0600 owner=root group=root follow=False _original_basename=bootstrap-ci-network-nm-connection.nmconnection.j2 checksum=efe191e05a5d9c3a8ea98857896bd858c9a3f9ca backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:03:15 np0005574782.novalocal sudo[7124]: pam_unix(sudo:session): session closed for user root
Jan 05 20:03:16 np0005574782.novalocal sudo[7174]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhwlvaofplmmqsxjluhbinmbtxbavohg ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 20:03:16 np0005574782.novalocal sudo[7174]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:03:16 np0005574782.novalocal python3[7176]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Stopped Network Manager Wait Online.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Stopping Network Manager Wait Online...
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4779] caught SIGTERM, shutting down normally.
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4790] dhcp4 (eth0): canceled DHCP transaction
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4790] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Stopping Network Manager...
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4790] dhcp4 (eth0): state changed no lease
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4794] manager: NetworkManager state is now CONNECTING
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4893] dhcp4 (eth1): canceled DHCP transaction
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4893] dhcp4 (eth1): state changed no lease
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[858]: <info>  [1767643396.4963] exiting (success)
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Stopped Network Manager.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: NetworkManager.service: Consumed 1.319s CPU time, 10.2M memory peak.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Starting Network Manager...
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.5565] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:a742f362-63b2-484d-bd96-34f7a12572fa)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.5569] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.5657] manager[0x556202758000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Starting Hostname Service...
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Started Hostname Service.
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6746] hostname: hostname: using hostnamed
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6750] hostname: static hostname changed from (none) to "np0005574782.novalocal"
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6759] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6767] manager[0x556202758000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6767] manager[0x556202758000]: rfkill: WWAN hardware radio set enabled
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6824] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6825] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6826] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6827] manager: Networking is enabled by state file
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6832] settings: Loaded settings plugin: keyfile (internal)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6838] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6878] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6894] dhcp: init: Using DHCP client 'internal'
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6898] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6906] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6915] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6926] device (lo): Activation: starting connection 'lo' (13386405-8334-4b8c-b612-8be49be697c2)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6936] device (eth0): carrier: link connected
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6943] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6950] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6951] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6961] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6971] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6983] device (eth1): carrier: link connected
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6989] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6998] manager: (eth1): assume: will attempt to assume matching connection 'Wired connection 1' (4f272fd3-0f0d-3c27-be08-0346479a4132) (indicated)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.6998] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7008] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7019] device (eth1): Activation: starting connection 'Wired connection 1' (4f272fd3-0f0d-3c27-be08-0346479a4132)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7028] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Started Network Manager.
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7036] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7040] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7044] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7049] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7056] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7060] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7065] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7070] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7082] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7087] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7104] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7108] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7128] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7132] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 20:03:16 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643396.7137] device (lo): Activation: successful, device activated.
Jan 05 20:03:16 np0005574782.novalocal systemd[1]: Starting Network Manager Wait Online...
Jan 05 20:03:16 np0005574782.novalocal sudo[7174]: pam_unix(sudo:session): session closed for user root
Jan 05 20:03:17 np0005574782.novalocal python3[7241]: ansible-ansible.legacy.command Invoked with _raw_params=ip route zuul_log_id=fa163efc-24cc-3f84-3f58-0000000000a7-0-controller zuul_ansible_split_streams=False _uses_shell=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:03:17 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643397.7565] dhcp4 (eth0): state changed new lease, address=38.102.83.179
Jan 05 20:03:17 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643397.7575] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2905] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2956] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2961] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2971] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2984] device (eth0): Activation: successful, device activated.
Jan 05 20:03:18 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643398.2996] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 20:03:28 np0005574782.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:03:46 np0005574782.novalocal systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2380] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 20:04:02 np0005574782.novalocal systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:04:02 np0005574782.novalocal systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2695] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2698] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2706] device (eth1): Activation: successful, device activated.
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2714] manager: startup complete
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2717] device (eth1): state change: activated -> failed (reason 'ip-config-unavailable', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <warn>  [1767643442.2723] device (eth1): Activation: failed for connection 'Wired connection 1'
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2733] device (eth1): state change: failed -> disconnected (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal systemd[1]: Finished Network Manager Wait Online.
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2823] dhcp4 (eth1): canceled DHCP transaction
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2823] dhcp4 (eth1): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2823] dhcp4 (eth1): state changed no lease
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2842] policy: auto-activating connection 'ci-private-network' (b2147c2e-bb86-524a-bb40-29a4bf6eda54)
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2850] device (eth1): Activation: starting connection 'ci-private-network' (b2147c2e-bb86-524a-bb40-29a4bf6eda54)
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2853] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2858] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2867] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.2881] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.3003] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.3007] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:04:02 np0005574782.novalocal NetworkManager[7183]: <info>  [1767643442.3016] device (eth1): Activation: successful, device activated.
Jan 05 20:04:12 np0005574782.novalocal systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:04:15 np0005574782.novalocal sudo[7363]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzkageeyumegoximmoqvyjpimrqjxgfd ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 20:04:15 np0005574782.novalocal sudo[7363]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:04:15 np0005574782.novalocal python3[7365]: ansible-ansible.legacy.stat Invoked with path=/etc/ci/env/networking-info.yml follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:04:15 np0005574782.novalocal sudo[7363]: pam_unix(sudo:session): session closed for user root
Jan 05 20:04:15 np0005574782.novalocal sudo[7436]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vlqthqnohgivndhbzrjxalmqvbvsvfuj ; OS_CLOUD=vexxhost /usr/bin/python3'
Jan 05 20:04:15 np0005574782.novalocal sudo[7436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:04:16 np0005574782.novalocal python3[7438]: ansible-ansible.legacy.copy Invoked with dest=/etc/ci/env/networking-info.yml owner=root group=root mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767643455.4371357-259-204312978855280/source _original_basename=tmpgswv0z0l follow=False checksum=9aaf6cb1fcb86528d0c9d4561e139af22e6da627 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:04:16 np0005574782.novalocal sudo[7436]: pam_unix(sudo:session): session closed for user root
Jan 05 20:04:21 np0005574782.novalocal systemd[4300]: Starting Mark boot as successful...
Jan 05 20:04:21 np0005574782.novalocal systemd[4300]: Finished Mark boot as successful.
Jan 05 20:05:16 np0005574782.novalocal sshd-session[4309]: Received disconnect from 38.102.83.114 port 49638:11: disconnected by user
Jan 05 20:05:16 np0005574782.novalocal sshd-session[4309]: Disconnected from user zuul 38.102.83.114 port 49638
Jan 05 20:05:16 np0005574782.novalocal sshd-session[4296]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:05:16 np0005574782.novalocal systemd-logind[788]: Session 1 logged out. Waiting for processes to exit.
Jan 05 20:06:54 np0005574782.novalocal sshd-session[7465]: Connection closed by 20.65.193.234 port 54042
Jan 05 20:06:54 np0005574782.novalocal sshd-session[7466]: banner exchange: Connection from 20.65.193.234 port 42166: invalid format
Jan 05 20:07:21 np0005574782.novalocal systemd[4300]: Created slice User Background Tasks Slice.
Jan 05 20:07:21 np0005574782.novalocal systemd[4300]: Starting Cleanup of User's Temporary Files and Directories...
Jan 05 20:07:21 np0005574782.novalocal systemd[4300]: Finished Cleanup of User's Temporary Files and Directories.
Jan 05 20:10:01 np0005574782.novalocal sshd-session[7470]: Accepted publickey for zuul from 38.102.83.114 port 46354 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 20:10:01 np0005574782.novalocal systemd-logind[788]: New session 3 of user zuul.
Jan 05 20:10:01 np0005574782.novalocal systemd[1]: Started Session 3 of User zuul.
Jan 05 20:10:01 np0005574782.novalocal sshd-session[7470]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:10:01 np0005574782.novalocal sudo[7497]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywfcpfhczdjlhmduffwxkpczycczakfm ; /usr/bin/python3'
Jan 05 20:10:01 np0005574782.novalocal sudo[7497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:02 np0005574782.novalocal python3[7499]: ansible-ansible.legacy.command Invoked with _raw_params=lsblk -nd -o MAJ:MIN /dev/vda
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-147c-6999-000000002179-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:02 np0005574782.novalocal sudo[7497]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:02 np0005574782.novalocal sudo[7526]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-crchziylcpihdryqweogouieufhhdrls ; /usr/bin/python3'
Jan 05 20:10:02 np0005574782.novalocal sudo[7526]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:02 np0005574782.novalocal python3[7528]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/init.scope state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:02 np0005574782.novalocal sudo[7526]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:02 np0005574782.novalocal sudo[7552]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvrpcgunaomwvhblgoeudkskhldwsicl ; /usr/bin/python3'
Jan 05 20:10:02 np0005574782.novalocal sudo[7552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:02 np0005574782.novalocal python3[7554]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/machine.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:02 np0005574782.novalocal sudo[7552]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:02 np0005574782.novalocal sudo[7578]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbvazcownmkyyrdsowvxelkdgtwweugl ; /usr/bin/python3'
Jan 05 20:10:02 np0005574782.novalocal sudo[7578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:02 np0005574782.novalocal python3[7580]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/system.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:02 np0005574782.novalocal sudo[7578]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:02 np0005574782.novalocal sudo[7604]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejecueznwqispcwrgvacbckvheqpmojg ; /usr/bin/python3'
Jan 05 20:10:02 np0005574782.novalocal sudo[7604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:03 np0005574782.novalocal python3[7606]: ansible-ansible.builtin.file Invoked with path=/sys/fs/cgroup/user.slice state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:03 np0005574782.novalocal sudo[7604]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:03 np0005574782.novalocal sudo[7630]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdzwmsbntcgqoagsltpdpltswtdpsdno ; /usr/bin/python3'
Jan 05 20:10:03 np0005574782.novalocal sudo[7630]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:03 np0005574782.novalocal python3[7632]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system.conf.d state=directory mode=0755 recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:03 np0005574782.novalocal sudo[7630]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:04 np0005574782.novalocal sudo[7708]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnzjluvdgoquthitlcfvpvivcxlrbzix ; /usr/bin/python3'
Jan 05 20:10:04 np0005574782.novalocal sudo[7708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:04 np0005574782.novalocal python3[7710]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system.conf.d/override.conf follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:10:04 np0005574782.novalocal sudo[7708]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:04 np0005574782.novalocal sudo[7781]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skbkaumpzhoguysmqxveoisbaidzmzqc ; /usr/bin/python3'
Jan 05 20:10:04 np0005574782.novalocal sudo[7781]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:04 np0005574782.novalocal python3[7783]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system.conf.d/override.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767643803.9860892-507-243297225765092/source _original_basename=tmpsearibuq follow=False checksum=a05098bd3d2321238ea1169d0e6f135b35b392d4 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:10:04 np0005574782.novalocal sudo[7781]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:05 np0005574782.novalocal sudo[7831]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nfzupihkuworyopuiivjtfqgehemsljq ; /usr/bin/python3'
Jan 05 20:10:05 np0005574782.novalocal sudo[7831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:05 np0005574782.novalocal python3[7833]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:10:05 np0005574782.novalocal systemd[1]: Reloading.
Jan 05 20:10:05 np0005574782.novalocal systemd-rc-local-generator[7855]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:10:06 np0005574782.novalocal sudo[7831]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:07 np0005574782.novalocal sudo[7886]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zrnywjcyhngjgpyzbybjpvwmqsvdyluc ; /usr/bin/python3'
Jan 05 20:10:07 np0005574782.novalocal sudo[7886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:07 np0005574782.novalocal python3[7888]: ansible-ansible.builtin.wait_for Invoked with path=/sys/fs/cgroup/system.slice/io.max state=present timeout=30 host=127.0.0.1 connect_timeout=5 delay=0 active_connection_states=['ESTABLISHED', 'FIN_WAIT1', 'FIN_WAIT2', 'SYN_RECV', 'SYN_SENT', 'TIME_WAIT'] sleep=1 port=None search_regex=None exclude_hosts=None msg=None
Jan 05 20:10:07 np0005574782.novalocal sudo[7886]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:07 np0005574782.novalocal sudo[7912]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atigppipqezdtecboacouujgoltoncsb ; /usr/bin/python3'
Jan 05 20:10:07 np0005574782.novalocal sudo[7912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:08 np0005574782.novalocal python3[7914]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/init.scope/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:08 np0005574782.novalocal sudo[7912]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:08 np0005574782.novalocal sudo[7940]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjhohbetawyxjrijybvemrqocgmsrwlw ; /usr/bin/python3'
Jan 05 20:10:08 np0005574782.novalocal sudo[7940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:08 np0005574782.novalocal python3[7942]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/machine.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:09 np0005574782.novalocal sudo[7940]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:09 np0005574782.novalocal sudo[7968]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqpikkhyddrljzuvmnherkdxbvovgzbd ; /usr/bin/python3'
Jan 05 20:10:09 np0005574782.novalocal sudo[7968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:09 np0005574782.novalocal python3[7970]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/system.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:09 np0005574782.novalocal sudo[7968]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:09 np0005574782.novalocal sudo[7996]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnimimzyfggcdezltynugmavzprqkemp ; /usr/bin/python3'
Jan 05 20:10:09 np0005574782.novalocal sudo[7996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:09 np0005574782.novalocal python3[7998]: ansible-ansible.legacy.command Invoked with _raw_params=echo "252:0   riops=18000 wiops=18000 rbps=262144000 wbps=262144000" > /sys/fs/cgroup/user.slice/io.max
                                                       _uses_shell=True zuul_log_id=in-loop-ignore zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:09 np0005574782.novalocal sudo[7996]: pam_unix(sudo:session): session closed for user root
Jan 05 20:10:10 np0005574782.novalocal python3[8025]: ansible-ansible.legacy.command Invoked with _raw_params=echo "init";    cat /sys/fs/cgroup/init.scope/io.max; echo "machine"; cat /sys/fs/cgroup/machine.slice/io.max; echo "system";  cat /sys/fs/cgroup/system.slice/io.max; echo "user";    cat /sys/fs/cgroup/user.slice/io.max;
                                                       _uses_shell=True zuul_log_id=fa163efc-24cc-147c-6999-000000002180-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:10:11 np0005574782.novalocal python3[8055]: ansible-ansible.builtin.stat Invoked with path=/sys/fs/cgroup/kubepods.slice/io.max follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 05 20:10:13 np0005574782.novalocal sshd-session[7473]: Connection closed by 38.102.83.114 port 46354
Jan 05 20:10:13 np0005574782.novalocal sshd-session[7470]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:10:13 np0005574782.novalocal systemd[1]: session-3.scope: Deactivated successfully.
Jan 05 20:10:13 np0005574782.novalocal systemd[1]: session-3.scope: Consumed 4.599s CPU time.
Jan 05 20:10:13 np0005574782.novalocal systemd-logind[788]: Session 3 logged out. Waiting for processes to exit.
Jan 05 20:10:13 np0005574782.novalocal systemd-logind[788]: Removed session 3.
Jan 05 20:10:14 np0005574782.novalocal sshd-session[8062]: Accepted publickey for zuul from 38.102.83.114 port 51844 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 20:10:14 np0005574782.novalocal systemd-logind[788]: New session 4 of user zuul.
Jan 05 20:10:14 np0005574782.novalocal systemd[1]: Started Session 4 of User zuul.
Jan 05 20:10:14 np0005574782.novalocal sshd-session[8062]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:10:15 np0005574782.novalocal sudo[8089]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vrzwmrkbdhyxtstcbpuaqhkngcfxntut ; /usr/bin/python3'
Jan 05 20:10:15 np0005574782.novalocal sudo[8089]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:10:15 np0005574782.novalocal python3[8091]: ansible-ansible.legacy.dnf Invoked with name=['podman', 'buildah'] state=present allow_downgrade=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_repoquery=True install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 allowerasing=False nobest=False use_backend=auto conf_file=None disable_excludes=None download_dir=None list=None releasever=None
Jan 05 20:10:16 np0005574782.novalocal irqbalance[782]: Cannot change IRQ 27 affinity: Operation not permitted
Jan 05 20:10:16 np0005574782.novalocal irqbalance[782]: IRQ 27 affinity is now unmanaged
Jan 05 20:11:17 np0005574782.novalocal setsebool[8409]: The virt_use_nfs policy boolean was changed to 1 by root
Jan 05 20:11:17 np0005574782.novalocal setsebool[8409]: The virt_sandbox_use_all_caps policy boolean was changed to 1 by root
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  Converting 385 SID table entries...
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:11:29 np0005574782.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  Converting 388 SID table entries...
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability open_perms=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:11:39 np0005574782.novalocal kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:11:57 np0005574782.novalocal dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 05 20:11:57 np0005574782.novalocal systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:11:57 np0005574782.novalocal systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:11:57 np0005574782.novalocal systemd[1]: Reloading.
Jan 05 20:11:57 np0005574782.novalocal systemd-rc-local-generator[9183]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:11:58 np0005574782.novalocal systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:11:59 np0005574782.novalocal sudo[8089]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:01 np0005574782.novalocal python3[11877]: ansible-ansible.legacy.command Invoked with _raw_params=echo "openstack-k8s-operators+cirobot"
                                                        _uses_shell=True zuul_log_id=fa163efc-24cc-9f4d-4389-00000000000a-1-compute0 zuul_ansible_split_streams=False warn=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:12:02 np0005574782.novalocal kernel: evm: overlay not supported
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: Starting D-Bus User Message Bus...
Jan 05 20:12:02 np0005574782.novalocal dbus-broker-launch[12633]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +31: Eavesdropping is deprecated and ignored
Jan 05 20:12:02 np0005574782.novalocal dbus-broker-launch[12633]: Policy to allow eavesdropping in /usr/share/dbus-1/session.conf +33: Eavesdropping is deprecated and ignored
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: Started D-Bus User Message Bus.
Jan 05 20:12:02 np0005574782.novalocal dbus-broker-lau[12633]: Ready
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: selinux: avc:  op=load_policy lsm=selinux seqno=4 res=1
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: Created slice Slice /user.
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: podman-12501.scope: unit configures an IP firewall, but not running as root.
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: (This warning is only shown for the first unit using IP firewalling.)
Jan 05 20:12:02 np0005574782.novalocal systemd[4300]: Started podman-12501.scope.
Jan 05 20:12:03 np0005574782.novalocal systemd[4300]: Started podman-pause-a4a93089.scope.
Jan 05 20:12:03 np0005574782.novalocal sudo[13219]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cggpimoigtgarmwgmrvthplpjkmeqtuu ; /usr/bin/python3'
Jan 05 20:12:03 np0005574782.novalocal sudo[13219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:03 np0005574782.novalocal python3[13237]: ansible-ansible.builtin.blockinfile Invoked with state=present insertafter=EOF dest=/etc/containers/registries.conf content=[[registry]]
                                                       location = "38.102.83.107:5001"
                                                       insecure = true path=/etc/containers/registries.conf block=[[registry]]
                                                       location = "38.102.83.107:5001"
                                                       insecure = true marker=# {mark} ANSIBLE MANAGED BLOCK create=False backup=False marker_begin=BEGIN marker_end=END unsafe_writes=False insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:12:03 np0005574782.novalocal python3[13237]: ansible-ansible.builtin.blockinfile [WARNING] Module remote_tmp /root/.ansible/tmp did not exist and was created with a mode of 0700, this may cause issues when running as another user. To avoid this, create the remote_tmp dir with the correct permissions manually
Jan 05 20:12:03 np0005574782.novalocal sudo[13219]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:04 np0005574782.novalocal sshd-session[8065]: Connection closed by 38.102.83.114 port 51844
Jan 05 20:12:04 np0005574782.novalocal sshd-session[8062]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:12:04 np0005574782.novalocal systemd[1]: session-4.scope: Deactivated successfully.
Jan 05 20:12:04 np0005574782.novalocal systemd[1]: session-4.scope: Consumed 59.491s CPU time.
Jan 05 20:12:04 np0005574782.novalocal systemd-logind[788]: Session 4 logged out. Waiting for processes to exit.
Jan 05 20:12:04 np0005574782.novalocal systemd-logind[788]: Removed session 4.
Jan 05 20:12:10 np0005574782.novalocal sshd-session[15813]: Invalid user user from 78.128.112.74 port 53900
Jan 05 20:12:10 np0005574782.novalocal sshd-session[15813]: Connection closed by invalid user user 78.128.112.74 port 53900 [preauth]
Jan 05 20:12:32 np0005574782.novalocal sshd-session[23241]: Connection closed by 38.102.83.164 port 35554 [preauth]
Jan 05 20:12:32 np0005574782.novalocal sshd-session[23243]: Connection closed by 38.102.83.164 port 35570 [preauth]
Jan 05 20:12:32 np0005574782.novalocal sshd-session[23247]: Unable to negotiate with 38.102.83.164 port 35586: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 05 20:12:32 np0005574782.novalocal sshd-session[23244]: Unable to negotiate with 38.102.83.164 port 35602: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 05 20:12:32 np0005574782.novalocal sshd-session[23250]: Unable to negotiate with 38.102.83.164 port 35606: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 05 20:12:36 np0005574782.novalocal sshd-session[24616]: Accepted publickey for zuul from 38.102.83.114 port 34514 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 20:12:36 np0005574782.novalocal systemd-logind[788]: New session 5 of user zuul.
Jan 05 20:12:36 np0005574782.novalocal systemd[1]: Started Session 5 of User zuul.
Jan 05 20:12:36 np0005574782.novalocal sshd-session[24616]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:12:36 np0005574782.novalocal python3[24719]: ansible-ansible.posix.authorized_key Invoked with user=zuul key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAoOM8mofUizkcE288s3dwwk94zO8/5Ea/9qNzl3njznXK6oL471d4kFkyCkhO/4O4fjvZ71KOZ2gMR2pra+DyE= zuul@np0005574781.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:12:37 np0005574782.novalocal sudo[24856]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugamehohgizqdxbhdsuylvitllvgvday ; /usr/bin/python3'
Jan 05 20:12:37 np0005574782.novalocal sudo[24856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:37 np0005574782.novalocal python3[24866]: ansible-ansible.posix.authorized_key Invoked with user=root key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAoOM8mofUizkcE288s3dwwk94zO8/5Ea/9qNzl3njznXK6oL471d4kFkyCkhO/4O4fjvZ71KOZ2gMR2pra+DyE= zuul@np0005574781.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:12:37 np0005574782.novalocal sudo[24856]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:38 np0005574782.novalocal sudo[25217]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-echcehejswhgebwosktdeiblzlqiucmu ; /usr/bin/python3'
Jan 05 20:12:38 np0005574782.novalocal sudo[25217]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:38 np0005574782.novalocal python3[25226]: ansible-ansible.builtin.user Invoked with name=cloud-admin shell=/bin/bash state=present non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on np0005574782.novalocal update_password=always uid=None group=None groups=None comment=None home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None
Jan 05 20:12:38 np0005574782.novalocal useradd[25291]: new group: name=cloud-admin, GID=1002
Jan 05 20:12:38 np0005574782.novalocal useradd[25291]: new user: name=cloud-admin, UID=1002, GID=1002, home=/home/cloud-admin, shell=/bin/bash, from=none
Jan 05 20:12:38 np0005574782.novalocal sudo[25217]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:38 np0005574782.novalocal sudo[25415]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ntefkxvxelctzbbklpgmhfvqrgqunhqq ; /usr/bin/python3'
Jan 05 20:12:38 np0005574782.novalocal sudo[25415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:38 np0005574782.novalocal python3[25422]: ansible-ansible.posix.authorized_key Invoked with user=cloud-admin key=ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBAoOM8mofUizkcE288s3dwwk94zO8/5Ea/9qNzl3njznXK6oL471d4kFkyCkhO/4O4fjvZ71KOZ2gMR2pra+DyE= zuul@np0005574781.novalocal
                                                        manage_dir=True state=present exclusive=False validate_certs=True follow=False path=None key_options=None comment=None
Jan 05 20:12:38 np0005574782.novalocal sudo[25415]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:38 np0005574782.novalocal sudo[25657]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-caanbubutjyicydfcwffptnufbjcbnib ; /usr/bin/python3'
Jan 05 20:12:38 np0005574782.novalocal sudo[25657]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:39 np0005574782.novalocal python3[25672]: ansible-ansible.legacy.stat Invoked with path=/etc/sudoers.d/cloud-admin follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:12:39 np0005574782.novalocal sudo[25657]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:39 np0005574782.novalocal sudo[25939]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnmwhwxzngrigpnruxtbcncoplmaomyr ; /usr/bin/python3'
Jan 05 20:12:39 np0005574782.novalocal sudo[25939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:39 np0005574782.novalocal python3[25949]: ansible-ansible.legacy.copy Invoked with dest=/etc/sudoers.d/cloud-admin mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767643958.8514895-135-202514717217740/source _original_basename=tmpf6gz78xe follow=False checksum=e7614e5ad3ab06eaae55b8efaa2ed81b63ea5634 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:12:39 np0005574782.novalocal sudo[25939]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:40 np0005574782.novalocal sudo[26165]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utwpcvormfpcgvevfalqorlftlxkqknw ; /usr/bin/python3'
Jan 05 20:12:40 np0005574782.novalocal sudo[26165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:12:40 np0005574782.novalocal python3[26174]: ansible-ansible.builtin.hostname Invoked with name=compute-0 use=systemd
Jan 05 20:12:40 np0005574782.novalocal systemd[1]: Starting Hostname Service...
Jan 05 20:12:40 np0005574782.novalocal systemd[1]: Started Hostname Service.
Jan 05 20:12:40 np0005574782.novalocal systemd-hostnamed[26291]: Changed pretty hostname to 'compute-0'
Jan 05 20:12:40 compute-0 systemd-hostnamed[26291]: Hostname set to <compute-0> (static)
Jan 05 20:12:40 compute-0 NetworkManager[7183]: <info>  [1767643960.6715] hostname: static hostname changed from "np0005574782.novalocal" to "compute-0"
Jan 05 20:12:40 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:12:40 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:12:40 compute-0 sudo[26165]: pam_unix(sudo:session): session closed for user root
Jan 05 20:12:40 compute-0 sshd-session[24665]: Connection closed by 38.102.83.114 port 34514
Jan 05 20:12:40 compute-0 sshd-session[24616]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:12:41 compute-0 systemd[1]: session-5.scope: Deactivated successfully.
Jan 05 20:12:41 compute-0 systemd[1]: session-5.scope: Consumed 2.369s CPU time.
Jan 05 20:12:41 compute-0 systemd-logind[788]: Session 5 logged out. Waiting for processes to exit.
Jan 05 20:12:41 compute-0 systemd-logind[788]: Removed session 5.
Jan 05 20:12:50 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:12:55 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:12:55 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:12:55 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1min 5.570s CPU time.
Jan 05 20:12:55 compute-0 systemd[1]: run-re7e89bf22cf342f3b0c3ab7f222aa798.service: Deactivated successfully.
Jan 05 20:13:10 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 20:16:21 compute-0 systemd[1]: Starting Cleanup of Temporary Directories...
Jan 05 20:16:21 compute-0 systemd[1]: systemd-tmpfiles-clean.service: Deactivated successfully.
Jan 05 20:16:21 compute-0 systemd[1]: Finished Cleanup of Temporary Directories.
Jan 05 20:16:21 compute-0 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dclean.service.mount: Deactivated successfully.
Jan 05 20:17:58 compute-0 sshd-session[30251]: Accepted publickey for zuul from 38.102.83.164 port 38116 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 20:17:58 compute-0 systemd-logind[788]: New session 6 of user zuul.
Jan 05 20:17:58 compute-0 systemd[1]: Started Session 6 of User zuul.
Jan 05 20:17:58 compute-0 sshd-session[30251]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:17:58 compute-0 python3[30327]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:18:01 compute-0 sudo[30441]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uvruabekxssaniqauatutyknahvigcfx ; /usr/bin/python3'
Jan 05 20:18:01 compute-0 sudo[30441]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:01 compute-0 python3[30443]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:01 compute-0 sudo[30441]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:02 compute-0 sudo[30514]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lwacoglykgdleukixhyrpombadvgmkiy ; /usr/bin/python3'
Jan 05 20:18:02 compute-0 sudo[30514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:02 compute-0 python3[30516]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=delorean.repo follow=False checksum=0f7c85cc67bf467c48edf98d5acc63e62d808324 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:02 compute-0 sudo[30514]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:02 compute-0 sudo[30540]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfbkmivebqezneahvdkcvppugasdcove ; /usr/bin/python3'
Jan 05 20:18:02 compute-0 sudo[30540]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:02 compute-0 python3[30542]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean-antelope-testing.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:02 compute-0 sudo[30540]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:02 compute-0 sudo[30613]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpynzxniglavnfkjykxfygmtomuzrgns ; /usr/bin/python3'
Jan 05 20:18:02 compute-0 sudo[30613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:03 compute-0 python3[30615]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=delorean-antelope-testing.repo follow=False checksum=4ebc56dead962b5d40b8d420dad43b948b84d3fc backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:03 compute-0 sudo[30613]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:03 compute-0 sudo[30639]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egrfmccmhenzwexwyzmhgaloedugjzvf ; /usr/bin/python3'
Jan 05 20:18:03 compute-0 sudo[30639]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:03 compute-0 python3[30641]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-highavailability.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:03 compute-0 sudo[30639]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:03 compute-0 sudo[30712]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ewoiwrywnssshpcnvuebmjwwertivnek ; /usr/bin/python3'
Jan 05 20:18:03 compute-0 sudo[30712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:03 compute-0 python3[30714]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=repo-setup-centos-highavailability.repo follow=False checksum=55d0f695fd0d8f47cbc3044ce0dcf5f88862490f backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:03 compute-0 sudo[30712]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:03 compute-0 sudo[30738]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phgzrkqvplbluadrunjlkiauluvtexob ; /usr/bin/python3'
Jan 05 20:18:03 compute-0 sudo[30738]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:04 compute-0 python3[30740]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-powertools.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:04 compute-0 sudo[30738]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:04 compute-0 sudo[30811]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjjgelnfnoocfvpcfaofmegthwasdsym ; /usr/bin/python3'
Jan 05 20:18:04 compute-0 sudo[30811]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:04 compute-0 python3[30813]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=repo-setup-centos-powertools.repo follow=False checksum=4b0cf99aa89c5c5be0151545863a7a7568f67568 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:04 compute-0 sudo[30811]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:04 compute-0 sudo[30837]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlzybtuafwjkjtpeiaxmdtabnodiggjw ; /usr/bin/python3'
Jan 05 20:18:04 compute-0 sudo[30837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:04 compute-0 python3[30839]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-appstream.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:04 compute-0 sudo[30837]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:05 compute-0 sudo[30910]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erjlykdhxgcyhjjwecesckfwnspsdfol ; /usr/bin/python3'
Jan 05 20:18:05 compute-0 sudo[30910]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:05 compute-0 python3[30912]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=repo-setup-centos-appstream.repo follow=False checksum=e89244d2503b2996429dda1857290c1e91e393a1 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:05 compute-0 sudo[30910]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:05 compute-0 sudo[30936]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lltougbvieycymcwjaiesgphacbtnryv ; /usr/bin/python3'
Jan 05 20:18:05 compute-0 sudo[30936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:05 compute-0 python3[30938]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/repo-setup-centos-baseos.repo follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:05 compute-0 sudo[30936]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:05 compute-0 sudo[31009]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gborcmtuoghybgsrcenwbwmkmhhqhcdu ; /usr/bin/python3'
Jan 05 20:18:05 compute-0 sudo[31009]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:06 compute-0 python3[31011]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=repo-setup-centos-baseos.repo follow=False checksum=36d926db23a40dbfa5c84b5e4d43eac6fa2301d6 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:06 compute-0 sudo[31009]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:06 compute-0 sudo[31035]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tactnlrgzfqemiozhvavsakrzemkiddf ; /usr/bin/python3'
Jan 05 20:18:06 compute-0 sudo[31035]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:06 compute-0 python3[31037]: ansible-ansible.legacy.stat Invoked with path=/etc/yum.repos.d/delorean.repo.md5 follow=False get_checksum=True checksum_algorithm=sha1 get_md5=False get_mime=True get_attributes=True
Jan 05 20:18:06 compute-0 sudo[31035]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:06 compute-0 sudo[31108]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mtystiaifdfuzujscytuwyjwyegulxni ; /usr/bin/python3'
Jan 05 20:18:06 compute-0 sudo[31108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:18:06 compute-0 python3[31110]: ansible-ansible.legacy.copy Invoked with dest=/etc/yum.repos.d/ src=/home/zuul/.ansible/tmp/ansible-tmp-1767644281.2673984-34025-122459528252298/source mode=0755 _original_basename=delorean.repo.md5 follow=False checksum=2583a70b3ee76a9837350b0837bc004a8e52405c backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None remote_src=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:18:06 compute-0 sudo[31108]: pam_unix(sudo:session): session closed for user root
Jan 05 20:18:09 compute-0 sshd-session[31135]: Connection closed by 192.168.122.11 port 51012 [preauth]
Jan 05 20:18:09 compute-0 sshd-session[31136]: Connection closed by 192.168.122.11 port 51020 [preauth]
Jan 05 20:18:09 compute-0 sshd-session[31137]: Unable to negotiate with 192.168.122.11 port 51022: no matching host key type found. Their offer: ssh-ed25519 [preauth]
Jan 05 20:18:09 compute-0 sshd-session[31138]: Unable to negotiate with 192.168.122.11 port 51028: no matching host key type found. Their offer: sk-ecdsa-sha2-nistp256@openssh.com [preauth]
Jan 05 20:18:09 compute-0 sshd-session[31139]: Unable to negotiate with 192.168.122.11 port 51034: no matching host key type found. Their offer: sk-ssh-ed25519@openssh.com [preauth]
Jan 05 20:20:54 compute-0 python3[31168]: ansible-ansible.legacy.command Invoked with _raw_params=hostname _uses_shell=False stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:25:53 compute-0 sshd-session[30254]: Received disconnect from 38.102.83.164 port 38116:11: disconnected by user
Jan 05 20:25:53 compute-0 sshd-session[30254]: Disconnected from user zuul 38.102.83.164 port 38116
Jan 05 20:25:53 compute-0 sshd-session[30251]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:25:53 compute-0 systemd[1]: session-6.scope: Deactivated successfully.
Jan 05 20:25:53 compute-0 systemd[1]: session-6.scope: Consumed 6.394s CPU time.
Jan 05 20:25:53 compute-0 systemd-logind[788]: Session 6 logged out. Waiting for processes to exit.
Jan 05 20:25:53 compute-0 systemd-logind[788]: Removed session 6.
Jan 05 20:27:29 compute-0 sshd-session[31172]: Connection closed by 36.255.220.229 port 35754
Jan 05 20:31:21 compute-0 systemd[1]: Starting dnf makecache...
Jan 05 20:31:21 compute-0 dnf[31174]: Failed determining last makecache time.
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-openstack-barbican-42b4c41831408a8e323 123 kB/s |  13 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-python-glean-10df0bd91b9bc5c9fd9cc02d7 831 kB/s |  65 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-openstack-cinder-1c00d6490d88e436f26ef 329 kB/s |  32 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-python-stevedore-c4acc5639fd2329372142 1.7 MB/s | 131 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-python-cloudkitty-tests-tempest-2c80f8 231 kB/s |  32 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-os-refresh-config-9bfc52b5049be2d8de61 8.2 MB/s | 349 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-openstack-nova-6f8decf0b4f1aa2e96292b6 840 kB/s |  42 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-python-designate-tests-tempest-347fdbc 951 kB/s |  18 kB     00:00
Jan 05 20:31:22 compute-0 dnf[31174]: delorean-openstack-glance-1fd12c29b339f30fe823e 311 kB/s |  18 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-keystone-e4b40af0ae3698fbbbb 175 kB/s |  29 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-manila-3c01b7181572c95dac462 1.1 MB/s |  25 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-python-whitebox-neutron-tests-tempest- 6.6 MB/s | 154 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-octavia-ba397f07a7331190208c 286 kB/s |  26 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-watcher-c014f81a8647287f6dcc 235 kB/s |  16 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-ansible-config_template-5ccaa22121a7ff  88 kB/s | 7.4 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-puppet-ceph-7352068d7b8c84ded636ab3158 1.5 MB/s | 144 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-swift-dc98a8463506ac520c469a 687 kB/s |  14 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-python-tempestconf-8515371b7cceebd4282 2.7 MB/s |  53 kB     00:00
Jan 05 20:31:23 compute-0 dnf[31174]: delorean-openstack-heat-ui-013accbfd179753bc3f0 506 kB/s |  96 kB     00:00
Jan 05 20:31:24 compute-0 dnf[31174]: CentOS Stream 9 - BaseOS                         44 kB/s | 6.7 kB     00:00
Jan 05 20:31:24 compute-0 dnf[31174]: CentOS Stream 9 - AppStream                      71 kB/s | 6.8 kB     00:00
Jan 05 20:31:24 compute-0 dnf[31174]: CentOS Stream 9 - CRB                            45 kB/s | 6.6 kB     00:00
Jan 05 20:31:24 compute-0 dnf[31174]: CentOS Stream 9 - Extras packages                31 kB/s | 7.3 kB     00:00
Jan 05 20:31:25 compute-0 dnf[31174]: dlrn-antelope-testing                           4.4 MB/s | 1.1 MB     00:00
Jan 05 20:31:25 compute-0 dnf[31174]: dlrn-antelope-build-deps                         16 MB/s | 461 kB     00:00
Jan 05 20:31:25 compute-0 dnf[31174]: centos9-rabbitmq                                2.3 MB/s | 123 kB     00:00
Jan 05 20:31:25 compute-0 dnf[31174]: centos9-storage                                 3.1 MB/s | 415 kB     00:00
Jan 05 20:31:26 compute-0 dnf[31174]: centos9-opstools                                4.3 MB/s |  51 kB     00:00
Jan 05 20:31:26 compute-0 dnf[31174]: NFV SIG OpenvSwitch                              22 MB/s | 461 kB     00:00
Jan 05 20:31:26 compute-0 dnf[31174]: repo-setup-centos-appstream                      73 MB/s |  26 MB     00:00
Jan 05 20:31:32 compute-0 dnf[31174]: repo-setup-centos-baseos                         71 MB/s | 8.8 MB     00:00
Jan 05 20:31:34 compute-0 dnf[31174]: repo-setup-centos-highavailability               27 MB/s | 744 kB     00:00
Jan 05 20:31:34 compute-0 dnf[31174]: repo-setup-centos-powertools                     67 MB/s | 7.4 MB     00:00
Jan 05 20:31:36 compute-0 dnf[31174]: Extra Packages for Enterprise Linux 9 - x86_64   31 MB/s |  20 MB     00:00
Jan 05 20:31:52 compute-0 dnf[31174]: Metadata cache created.
Jan 05 20:31:52 compute-0 systemd[1]: dnf-makecache.service: Deactivated successfully.
Jan 05 20:31:52 compute-0 systemd[1]: Finished dnf makecache.
Jan 05 20:31:52 compute-0 systemd[1]: dnf-makecache.service: Consumed 27.283s CPU time.
Jan 05 20:33:33 compute-0 sshd-session[31276]: Accepted publickey for zuul from 192.168.122.30 port 50806 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:33:33 compute-0 systemd-logind[788]: New session 7 of user zuul.
Jan 05 20:33:33 compute-0 systemd[1]: Started Session 7 of User zuul.
Jan 05 20:33:33 compute-0 sshd-session[31276]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:33:34 compute-0 python3.9[31429]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:33:36 compute-0 sudo[31608]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oosnaxbfyfuptuklkamxytdiidsrxwou ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645216.3866367-32-72043498975274/AnsiballZ_command.py'
Jan 05 20:33:36 compute-0 sudo[31608]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:33:37 compute-0 python3.9[31610]: ansible-ansible.legacy.command Invoked with _raw_params=set -euxo pipefail
                                            pushd /var/tmp
                                            curl -sL https://github.com/openstack-k8s-operators/repo-setup/archive/refs/heads/main.tar.gz | tar -xz
                                            pushd repo-setup-main
                                            python3 -m venv ./venv
                                            PBR_VERSION=0.0.0 ./venv/bin/pip install ./
                                            ./venv/bin/repo-setup current-podified -b antelope
                                            popd
                                            rm -rf repo-setup-main
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:33:44 compute-0 sudo[31608]: pam_unix(sudo:session): session closed for user root
Jan 05 20:33:45 compute-0 sshd-session[31279]: Connection closed by 192.168.122.30 port 50806
Jan 05 20:33:45 compute-0 sshd-session[31276]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:33:45 compute-0 systemd[1]: session-7.scope: Deactivated successfully.
Jan 05 20:33:45 compute-0 systemd[1]: session-7.scope: Consumed 8.733s CPU time.
Jan 05 20:33:45 compute-0 systemd-logind[788]: Session 7 logged out. Waiting for processes to exit.
Jan 05 20:33:45 compute-0 systemd-logind[788]: Removed session 7.
Jan 05 20:33:51 compute-0 sshd-session[31667]: Accepted publickey for zuul from 192.168.122.30 port 37618 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:33:51 compute-0 systemd-logind[788]: New session 8 of user zuul.
Jan 05 20:33:51 compute-0 systemd[1]: Started Session 8 of User zuul.
Jan 05 20:33:51 compute-0 sshd-session[31667]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:33:52 compute-0 python3.9[31821]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:33:52 compute-0 sshd-session[31671]: Connection closed by 192.168.122.30 port 37618
Jan 05 20:33:52 compute-0 sshd-session[31667]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:33:52 compute-0 systemd[1]: session-8.scope: Deactivated successfully.
Jan 05 20:33:52 compute-0 systemd-logind[788]: Session 8 logged out. Waiting for processes to exit.
Jan 05 20:33:52 compute-0 systemd-logind[788]: Removed session 8.
Jan 05 20:34:08 compute-0 sshd-session[31850]: Accepted publickey for zuul from 192.168.122.30 port 50000 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:34:08 compute-0 systemd-logind[788]: New session 9 of user zuul.
Jan 05 20:34:08 compute-0 systemd[1]: Started Session 9 of User zuul.
Jan 05 20:34:08 compute-0 sshd-session[31850]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:34:09 compute-0 python3.9[32003]: ansible-ansible.legacy.ping Invoked with data=pong
Jan 05 20:34:10 compute-0 python3.9[32177]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:34:11 compute-0 sudo[32327]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnvmzbglygmgsanuomsfrmidrwdtnymw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645250.6709452-45-199449811953077/AnsiballZ_command.py'
Jan 05 20:34:11 compute-0 sudo[32327]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:11 compute-0 python3.9[32329]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:34:11 compute-0 sudo[32327]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:12 compute-0 sudo[32480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcbagpiaceaxjdamltcykckbrjggcjly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645251.697866-57-184275191969757/AnsiballZ_stat.py'
Jan 05 20:34:12 compute-0 sudo[32480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:12 compute-0 python3.9[32482]: ansible-ansible.builtin.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:34:12 compute-0 sudo[32480]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:13 compute-0 sudo[32632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tpmyhyxxqsdzcsxhddvdpuvehndluosk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645252.6056726-65-70111836404378/AnsiballZ_file.py'
Jan 05 20:34:13 compute-0 sudo[32632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:13 compute-0 python3.9[32634]: ansible-ansible.builtin.file Invoked with mode=755 path=/etc/ansible/facts.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:34:13 compute-0 sudo[32632]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:13 compute-0 sudo[32784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwwgbtvkphwrqjcayfwlcbmfivrlsirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645253.5424893-73-126564884870959/AnsiballZ_stat.py'
Jan 05 20:34:13 compute-0 sudo[32784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:14 compute-0 python3.9[32786]: ansible-ansible.legacy.stat Invoked with path=/etc/ansible/facts.d/bootc.fact follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:34:14 compute-0 sudo[32784]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:14 compute-0 sudo[32907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnhmrlwgjcnrbspxfwbpixccfxgfqyye ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645253.5424893-73-126564884870959/AnsiballZ_copy.py'
Jan 05 20:34:14 compute-0 sudo[32907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:15 compute-0 python3.9[32909]: ansible-ansible.legacy.copy Invoked with dest=/etc/ansible/facts.d/bootc.fact mode=755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645253.5424893-73-126564884870959/.source.fact _original_basename=bootc.fact follow=False checksum=eb4122ce7fc50a38407beb511c4ff8c178005b12 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:34:15 compute-0 sudo[32907]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:15 compute-0 sudo[33059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlmqcopvbxdhdceunqwhboblbubtdzxp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645255.398235-88-66574599666735/AnsiballZ_setup.py'
Jan 05 20:34:15 compute-0 sudo[33059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:16 compute-0 python3.9[33061]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:34:16 compute-0 sudo[33059]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:16 compute-0 sudo[33215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usuxusibbxpxlxfkxupyxwenrpzknpvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645256.508983-96-262878048713432/AnsiballZ_file.py'
Jan 05 20:34:16 compute-0 sudo[33215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:17 compute-0 python3.9[33217]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/log/journal setype=var_log_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:34:17 compute-0 sudo[33215]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:17 compute-0 sudo[33367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkxwobnporngyuxghwbatduzkciwgkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645257.4222512-105-221037936411048/AnsiballZ_file.py'
Jan 05 20:34:17 compute-0 sudo[33367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:17 compute-0 python3.9[33369]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/config-data/ansible-generated recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:34:17 compute-0 sudo[33367]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:18 compute-0 python3.9[33519]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:34:24 compute-0 python3.9[33772]: ansible-ansible.builtin.lineinfile Invoked with line=cloud-init=disabled path=/proc/cmdline state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:34:25 compute-0 python3.9[33922]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:34:26 compute-0 python3.9[34076]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:34:27 compute-0 sudo[34232]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qtnymhqnlvyqdxoukmgmgfkhbzdksefr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645267.3237348-153-122969016787015/AnsiballZ_setup.py'
Jan 05 20:34:27 compute-0 sudo[34232]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:27 compute-0 python3.9[34234]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:34:28 compute-0 sudo[34232]: pam_unix(sudo:session): session closed for user root
Jan 05 20:34:28 compute-0 sudo[34316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mkylrlkiaolloyotgyltbfjxdwckzyxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645267.3237348-153-122969016787015/AnsiballZ_dnf.py'
Jan 05 20:34:28 compute-0 sudo[34316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:34:28 compute-0 python3.9[34318]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:35:22 compute-0 systemd[1]: Reloading.
Jan 05 20:35:22 compute-0 systemd-rc-local-generator[34587]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:35:23 compute-0 systemd[1]: Listening on Device-mapper event daemon FIFOs.
Jan 05 20:35:23 compute-0 systemd[1]: Reloading.
Jan 05 20:35:23 compute-0 systemd-rc-local-generator[34633]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:35:23 compute-0 systemd[1]: Starting Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling...
Jan 05 20:35:23 compute-0 systemd[1]: Finished Monitoring of LVM2 mirrors, snapshots etc. using dmeventd or progress polling.
Jan 05 20:35:23 compute-0 systemd[1]: Reloading.
Jan 05 20:35:23 compute-0 systemd-rc-local-generator[34670]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:35:23 compute-0 systemd[1]: Listening on LVM2 poll daemon socket.
Jan 05 20:35:24 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:35:24 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:35:24 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:36:32 compute-0 kernel: SELinux:  Converting 2721 SID table entries...
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:36:32 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:36:32 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=6 res=1
Jan 05 20:36:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:36:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:36:33 compute-0 systemd[1]: Reloading.
Jan 05 20:36:33 compute-0 systemd-rc-local-generator[35005]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:36:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:36:33 compute-0 sudo[34316]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:34 compute-0 sudo[35814]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ncdaafpeqfdapocmpvcpktifzpcrcfkq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645393.989131-165-22829215029590/AnsiballZ_command.py'
Jan 05 20:36:34 compute-0 sudo[35814]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:34 compute-0 python3.9[35844]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:36:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:36:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:36:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.740s CPU time.
Jan 05 20:36:34 compute-0 systemd[1]: run-rcc504fdd4adb4044b2fcf1310bf4c662.service: Deactivated successfully.
Jan 05 20:36:35 compute-0 sudo[35814]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:36 compute-0 sudo[36197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixfaexljkpcgowbhmuuhgfygdpftnsxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645395.8104181-173-76870682259715/AnsiballZ_selinux.py'
Jan 05 20:36:36 compute-0 sudo[36197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:36 compute-0 python3.9[36199]: ansible-ansible.posix.selinux Invoked with policy=targeted state=enforcing configfile=/etc/selinux/config update_kernel_param=False
Jan 05 20:36:36 compute-0 sudo[36197]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:37 compute-0 sudo[36349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilnxxynwompiynwparvqjqggexylkifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645397.371846-184-60250354015142/AnsiballZ_command.py'
Jan 05 20:36:37 compute-0 sudo[36349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:38 compute-0 python3.9[36351]: ansible-ansible.legacy.command Invoked with cmd=dd if=/dev/zero of=/swap count=1024 bs=1M creates=/swap _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None removes=None stdin=None
Jan 05 20:36:39 compute-0 sudo[36349]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:40 compute-0 sudo[36502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zccrogpwdjkgppnlikurqmzheawzhwat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645399.8468945-192-250364996668410/AnsiballZ_file.py'
Jan 05 20:36:40 compute-0 sudo[36502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:41 compute-0 python3.9[36504]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/swap recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:36:41 compute-0 sudo[36502]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:42 compute-0 sudo[36654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ximdoeiunvbjzabzdriigwstpbnyxtcr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645401.467872-200-150676790003358/AnsiballZ_mount.py'
Jan 05 20:36:42 compute-0 sudo[36654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:42 compute-0 python3.9[36656]: ansible-ansible.posix.mount Invoked with dump=0 fstype=swap name=none opts=sw passno=0 src=/swap state=present path=none boot=True opts_no_log=False backup=False fstab=None
Jan 05 20:36:42 compute-0 sudo[36654]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:43 compute-0 sudo[36806]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzhiwemaqruibmeswzlrlcjavvpltmxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645403.2438843-228-53755944546974/AnsiballZ_file.py'
Jan 05 20:36:43 compute-0 sudo[36806]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:43 compute-0 python3.9[36808]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/ca-trust/source/anchors setype=cert_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:36:43 compute-0 sudo[36806]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:44 compute-0 sudo[36958]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwokftgtkesiqpzjyneieeiqbanlviar ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645404.0068474-236-222032118221456/AnsiballZ_stat.py'
Jan 05 20:36:44 compute-0 sudo[36958]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:44 compute-0 python3.9[36960]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:36:44 compute-0 sudo[36958]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:45 compute-0 sudo[37081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uurgokprseurefkwqljkvozrciaaxnil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645404.0068474-236-222032118221456/AnsiballZ_copy.py'
Jan 05 20:36:45 compute-0 sudo[37081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:47 compute-0 python3.9[37083]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/ca-trust/source/anchors/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645404.0068474-236-222032118221456/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:36:47 compute-0 sudo[37081]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:48 compute-0 sudo[37233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aykygjprkjyflsbvnhpfmfoupvnawdfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645407.7160456-260-205289777062168/AnsiballZ_stat.py'
Jan 05 20:36:48 compute-0 sudo[37233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:50 compute-0 python3.9[37235]: ansible-ansible.builtin.stat Invoked with path=/etc/lvm/devices/system.devices follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:36:50 compute-0 sudo[37233]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:51 compute-0 sudo[37385]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnjdwfpcjnxaodesvpchoxwrzthxxmos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645410.7792492-268-7519483295589/AnsiballZ_command.py'
Jan 05 20:36:51 compute-0 sudo[37385]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:51 compute-0 python3.9[37387]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/vgimportdevices --all _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:36:51 compute-0 sudo[37385]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:52 compute-0 sudo[37538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hdzlfhxogelqpycfssftzusmgszfvvba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645411.7974477-276-18185218445167/AnsiballZ_file.py'
Jan 05 20:36:52 compute-0 sudo[37538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:52 compute-0 python3.9[37540]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/lvm/devices/system.devices state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:36:52 compute-0 sudo[37538]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:53 compute-0 sudo[37690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsomkphmvwavwdpvbtphtqhezspwpekq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645412.7688596-287-69626576691712/AnsiballZ_getent.py'
Jan 05 20:36:53 compute-0 sudo[37690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:53 compute-0 python3.9[37692]: ansible-ansible.builtin.getent Invoked with database=passwd key=qemu fail_key=True service=None split=None
Jan 05 20:36:53 compute-0 sudo[37690]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:53 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:36:54 compute-0 sudo[37844]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-plzbfvtgcrhxlxonsoimkceyetdccquh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645413.682936-295-225192535371417/AnsiballZ_group.py'
Jan 05 20:36:54 compute-0 sudo[37844]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:54 compute-0 python3.9[37846]: ansible-ansible.builtin.group Invoked with gid=107 name=qemu state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:36:54 compute-0 groupadd[37847]: group added to /etc/group: name=qemu, GID=107
Jan 05 20:36:54 compute-0 groupadd[37847]: group added to /etc/gshadow: name=qemu
Jan 05 20:36:54 compute-0 groupadd[37847]: new group: name=qemu, GID=107
Jan 05 20:36:54 compute-0 sudo[37844]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:55 compute-0 sudo[38002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kzofxlkgtotkmiyknnevsdzclbsxhcni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645414.7712765-303-78733759968304/AnsiballZ_user.py'
Jan 05 20:36:55 compute-0 sudo[38002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:55 compute-0 python3.9[38004]: ansible-ansible.builtin.user Invoked with comment=qemu user group=qemu groups=[''] name=qemu shell=/sbin/nologin state=present uid=107 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 20:36:55 compute-0 useradd[38006]: new user: name=qemu, UID=107, GID=107, home=/home/qemu, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 20:36:55 compute-0 sudo[38002]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:56 compute-0 sudo[38162]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-capfxabtjucqukygxkufpsakylwrvona ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645416.0089636-311-197178986040277/AnsiballZ_getent.py'
Jan 05 20:36:56 compute-0 sudo[38162]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:56 compute-0 python3.9[38164]: ansible-ansible.builtin.getent Invoked with database=passwd key=hugetlbfs fail_key=True service=None split=None
Jan 05 20:36:56 compute-0 sudo[38162]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:57 compute-0 sudo[38315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvggzfjqwyujjjrrwawtpjcculzqdxug ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645416.8062265-319-275663872267337/AnsiballZ_group.py'
Jan 05 20:36:57 compute-0 sudo[38315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:57 compute-0 python3.9[38317]: ansible-ansible.builtin.group Invoked with gid=42477 name=hugetlbfs state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:36:57 compute-0 groupadd[38318]: group added to /etc/group: name=hugetlbfs, GID=42477
Jan 05 20:36:57 compute-0 groupadd[38318]: group added to /etc/gshadow: name=hugetlbfs
Jan 05 20:36:57 compute-0 groupadd[38318]: new group: name=hugetlbfs, GID=42477
Jan 05 20:36:57 compute-0 sudo[38315]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:58 compute-0 sudo[38473]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmtnmfivesdokeqyhxtwolmrrsyqcdtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645417.7422564-328-10883121904160/AnsiballZ_file.py'
Jan 05 20:36:58 compute-0 sudo[38473]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:58 compute-0 python3.9[38475]: ansible-ansible.builtin.file Invoked with group=qemu mode=0755 owner=qemu path=/var/lib/vhost_sockets setype=virt_cache_t seuser=system_u state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None serole=None selevel=None attributes=None
Jan 05 20:36:58 compute-0 sudo[38473]: pam_unix(sudo:session): session closed for user root
Jan 05 20:36:59 compute-0 sudo[38625]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yoomqbqlifaedonqcxkqdmcynsfnjcuk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645418.800264-339-84167402457523/AnsiballZ_dnf.py'
Jan 05 20:36:59 compute-0 sudo[38625]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:36:59 compute-0 python3.9[38627]: ansible-ansible.legacy.dnf Invoked with name=['dracut-config-generic'] state=absent allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:37:01 compute-0 sudo[38625]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:01 compute-0 sudo[38778]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpnbofczydsppnbedblexavtihciokii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645421.4810598-347-144971769867761/AnsiballZ_file.py'
Jan 05 20:37:01 compute-0 sudo[38778]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:02 compute-0 python3.9[38780]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/modules-load.d setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:02 compute-0 sudo[38778]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:02 compute-0 sudo[38930]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwqdgdntjtspkhccyajtfslvykesvbje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645422.225805-355-79291985132556/AnsiballZ_stat.py'
Jan 05 20:37:02 compute-0 sudo[38930]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:02 compute-0 python3.9[38932]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:37:02 compute-0 sudo[38930]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:03 compute-0 sudo[39053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xcbdlyziipdnpwhdfguqzhfwzlnngbjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645422.225805-355-79291985132556/AnsiballZ_copy.py'
Jan 05 20:37:03 compute-0 sudo[39053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:03 compute-0 python3.9[39055]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645422.225805-355-79291985132556/.source.conf follow=False _original_basename=edpm-modprobe.conf.j2 checksum=8021efe01721d8fa8cab46b95c00ec1be6dbb9d0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:03 compute-0 sudo[39053]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:04 compute-0 sudo[39205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-exxewzzmixzdszfzsrjkzgvwyoxqzssy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645423.7777147-370-93853281703895/AnsiballZ_systemd.py'
Jan 05 20:37:04 compute-0 sudo[39205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:04 compute-0 python3.9[39207]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:37:04 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 20:37:04 compute-0 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 05 20:37:04 compute-0 kernel: Bridge firewalling registered
Jan 05 20:37:04 compute-0 systemd-modules-load[39211]: Inserted module 'br_netfilter'
Jan 05 20:37:04 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 20:37:04 compute-0 sudo[39205]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:05 compute-0 sudo[39365]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggntiisnotnjquunbfzmfwlzljzentjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645425.1251063-378-156158037877197/AnsiballZ_stat.py'
Jan 05 20:37:05 compute-0 sudo[39365]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:05 compute-0 python3.9[39367]: ansible-ansible.legacy.stat Invoked with path=/etc/sysctl.d/99-edpm.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:37:05 compute-0 sudo[39365]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:06 compute-0 sudo[39488]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekgtgrwilxyjsqmqeslzspspmnaumqgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645425.1251063-378-156158037877197/AnsiballZ_copy.py'
Jan 05 20:37:06 compute-0 sudo[39488]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:06 compute-0 python3.9[39490]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysctl.d/99-edpm.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645425.1251063-378-156158037877197/.source.conf follow=False _original_basename=edpm-sysctl.conf.j2 checksum=2a366439721b855adcfe4d7f152babb68596a007 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:06 compute-0 sudo[39488]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:06 compute-0 sudo[39640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjpivkgmfszhujwfnpnufnkrxhcteznl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645426.6852455-396-52126089169068/AnsiballZ_dnf.py'
Jan 05 20:37:06 compute-0 sudo[39640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:07 compute-0 python3.9[39642]: ansible-ansible.legacy.dnf Invoked with name=['tuned', 'tuned-profiles-cpu-partitioning'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:37:14 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:37:14 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:37:14 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:37:14 compute-0 systemd[1]: Reloading.
Jan 05 20:37:15 compute-0 systemd-rc-local-generator[39720]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:37:15 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:37:15 compute-0 sudo[39640]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:16 compute-0 python3.9[40857]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/active_profile follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:37:17 compute-0 python3.9[41690]: ansible-ansible.builtin.slurp Invoked with src=/etc/tuned/active_profile
Jan 05 20:37:18 compute-0 python3.9[42399]: ansible-ansible.builtin.stat Invoked with path=/etc/tuned/throughput-performance-variables.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:37:19 compute-0 sudo[43241]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzyzltyafpaczjurxtunpufleialcfmt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645438.6307728-435-102200705410775/AnsiballZ_command.py'
Jan 05 20:37:19 compute-0 sudo[43241]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:19 compute-0 python3.9[43271]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/tuned-adm profile throughput-performance _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:19 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 05 20:37:19 compute-0 systemd[1]: Starting Authorization Manager...
Jan 05 20:37:19 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 05 20:37:19 compute-0 polkitd[44036]: Started polkitd version 0.117
Jan 05 20:37:19 compute-0 polkitd[44036]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 20:37:19 compute-0 polkitd[44036]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 20:37:19 compute-0 polkitd[44036]: Finished loading, compiling and executing 2 rules
Jan 05 20:37:19 compute-0 polkitd[44036]: Acquired the name org.freedesktop.PolicyKit1 on the system bus
Jan 05 20:37:19 compute-0 systemd[1]: Started Authorization Manager.
Jan 05 20:37:20 compute-0 sudo[43241]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:20 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:37:20 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:37:20 compute-0 systemd[1]: man-db-cache-update.service: Consumed 6.526s CPU time.
Jan 05 20:37:20 compute-0 systemd[1]: run-rc3f114f12500409aa5c76c0925332048.service: Deactivated successfully.
Jan 05 20:37:20 compute-0 sudo[44205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjpmuzsedzwuhsywkqfzrgtjzlnrklfi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645440.2364726-444-116604973296323/AnsiballZ_systemd.py'
Jan 05 20:37:20 compute-0 sudo[44205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:20 compute-0 python3.9[44207]: ansible-ansible.builtin.systemd Invoked with enabled=True name=tuned state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:37:20 compute-0 systemd[1]: Stopping Dynamic System Tuning Daemon...
Jan 05 20:37:21 compute-0 systemd[1]: tuned.service: Deactivated successfully.
Jan 05 20:37:21 compute-0 systemd[1]: Stopped Dynamic System Tuning Daemon.
Jan 05 20:37:21 compute-0 systemd[1]: Starting Dynamic System Tuning Daemon...
Jan 05 20:37:21 compute-0 systemd[1]: Started Dynamic System Tuning Daemon.
Jan 05 20:37:21 compute-0 sudo[44205]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:21 compute-0 python3.9[44368]: ansible-ansible.builtin.slurp Invoked with src=/proc/cmdline
Jan 05 20:37:23 compute-0 sudo[44518]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iccmoplqcrigycbpmvcjhaxfxdqmgwdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645443.5500903-501-200648816983767/AnsiballZ_systemd.py'
Jan 05 20:37:23 compute-0 sudo[44518]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:24 compute-0 python3.9[44520]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksm.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:37:24 compute-0 systemd[1]: Reloading.
Jan 05 20:37:24 compute-0 systemd-rc-local-generator[44547]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:37:24 compute-0 sudo[44518]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:25 compute-0 sudo[44707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhxgmzgmgmpexhtwhqpaqfwzsesclgxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645444.7321024-501-196398988418791/AnsiballZ_systemd.py'
Jan 05 20:37:25 compute-0 sudo[44707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:25 compute-0 python3.9[44709]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ksmtuned.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:37:25 compute-0 systemd[1]: Reloading.
Jan 05 20:37:25 compute-0 systemd-rc-local-generator[44740]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:37:25 compute-0 sudo[44707]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:26 compute-0 sudo[44896]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rlsrlrblazfkttrsibkqqcfcnbrushpt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645446.038456-517-46787021777488/AnsiballZ_command.py'
Jan 05 20:37:26 compute-0 sudo[44896]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:26 compute-0 python3.9[44898]: ansible-ansible.legacy.command Invoked with _raw_params=mkswap "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:26 compute-0 sudo[44896]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:27 compute-0 sudo[45049]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqxcjpjqmcjiakhenojgttlxslakpdlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645446.8431187-525-34824645261600/AnsiballZ_command.py'
Jan 05 20:37:27 compute-0 sudo[45049]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:27 compute-0 python3.9[45051]: ansible-ansible.legacy.command Invoked with _raw_params=swapon "/swap" _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:27 compute-0 kernel: Adding 1048572k swap on /swap.  Priority:-2 extents:1 across:1048572k 
Jan 05 20:37:27 compute-0 sudo[45049]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:28 compute-0 sudo[45202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqusttwgguydmnklrvbcojzymxkiorxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645447.679394-533-188315677398120/AnsiballZ_command.py'
Jan 05 20:37:28 compute-0 sudo[45202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:28 compute-0 python3.9[45204]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/update-ca-trust _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:29 compute-0 sudo[45202]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:30 compute-0 sudo[45364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzitegcttqttfongzqoujrqonqvbbvuq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645450.0218177-541-270863947984200/AnsiballZ_command.py'
Jan 05 20:37:30 compute-0 sudo[45364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:30 compute-0 python3.9[45366]: ansible-ansible.legacy.command Invoked with _raw_params=echo 2 >/sys/kernel/mm/ksm/run _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:30 compute-0 sudo[45364]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:31 compute-0 sudo[45517]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfdwnpnuqmcpaapmnioyddregmqkcjmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645450.8856497-549-190882548302414/AnsiballZ_systemd.py'
Jan 05 20:37:31 compute-0 sudo[45517]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:31 compute-0 python3.9[45519]: ansible-ansible.builtin.systemd Invoked with name=systemd-sysctl.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:37:31 compute-0 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 05 20:37:31 compute-0 systemd[1]: Stopped Apply Kernel Variables.
Jan 05 20:37:31 compute-0 systemd[1]: Stopping Apply Kernel Variables...
Jan 05 20:37:31 compute-0 systemd[1]: Starting Apply Kernel Variables...
Jan 05 20:37:31 compute-0 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 05 20:37:31 compute-0 systemd[1]: Finished Apply Kernel Variables.
Jan 05 20:37:31 compute-0 sudo[45517]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:32 compute-0 sshd-session[31853]: Connection closed by 192.168.122.30 port 50000
Jan 05 20:37:32 compute-0 sshd-session[31850]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:37:32 compute-0 systemd[1]: session-9.scope: Deactivated successfully.
Jan 05 20:37:32 compute-0 systemd[1]: session-9.scope: Consumed 2min 28.626s CPU time.
Jan 05 20:37:32 compute-0 systemd-logind[788]: Session 9 logged out. Waiting for processes to exit.
Jan 05 20:37:32 compute-0 systemd-logind[788]: Removed session 9.
Jan 05 20:37:38 compute-0 sshd-session[45549]: Accepted publickey for zuul from 192.168.122.30 port 58330 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:37:38 compute-0 systemd-logind[788]: New session 10 of user zuul.
Jan 05 20:37:38 compute-0 systemd[1]: Started Session 10 of User zuul.
Jan 05 20:37:38 compute-0 sshd-session[45549]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:37:39 compute-0 python3.9[45702]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:37:40 compute-0 python3.9[45856]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:37:41 compute-0 sudo[46010]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dwszoaxoeulynxbwtajmyvoejvvqbvlr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645461.3710604-50-76222059387805/AnsiballZ_command.py'
Jan 05 20:37:41 compute-0 sudo[46010]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:42 compute-0 python3.9[46012]: ansible-ansible.legacy.command Invoked with _raw_params=PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin which growvols
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:42 compute-0 sudo[46010]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:43 compute-0 python3.9[46163]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:37:44 compute-0 sudo[46317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dordvndcxjghohrloqbcmslmsquzwvmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645463.620907-70-201538048299356/AnsiballZ_setup.py'
Jan 05 20:37:44 compute-0 sudo[46317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:44 compute-0 python3.9[46319]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:37:44 compute-0 sudo[46317]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:45 compute-0 sudo[46401]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjqivkgljczpzafpekpliygpkloytoxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645463.620907-70-201538048299356/AnsiballZ_dnf.py'
Jan 05 20:37:45 compute-0 sudo[46401]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:45 compute-0 python3.9[46403]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:37:46 compute-0 sudo[46401]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:47 compute-0 sudo[46554]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xztunrqextelkbdxkulcroptbekakbff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645466.9812357-82-267194943886988/AnsiballZ_setup.py'
Jan 05 20:37:47 compute-0 sudo[46554]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:47 compute-0 python3.9[46556]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:37:47 compute-0 sudo[46554]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:48 compute-0 sudo[46725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkowshdkfvthdewnuigfrhexwcpgvjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645468.2083323-93-278482505423976/AnsiballZ_file.py'
Jan 05 20:37:48 compute-0 sudo[46725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:49 compute-0 python3.9[46727]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:37:49 compute-0 sudo[46725]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:49 compute-0 sudo[46877]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udejwfsmqrvkekgnbmaikrehqouacaje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645469.2833228-101-156733983618773/AnsiballZ_command.py'
Jan 05 20:37:49 compute-0 sudo[46877]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:49 compute-0 python3.9[46879]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:37:49 compute-0 podman[46880]: 2026-01-05 20:37:49.978906145 +0000 UTC m=+0.065057254 system refresh
Jan 05 20:37:50 compute-0 sudo[46877]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:50 compute-0 sudo[47040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jzgwtxibbvmwypwvbaqiwjtgzjzmlmxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645470.2898169-109-12674638864185/AnsiballZ_stat.py'
Jan 05 20:37:50 compute-0 sudo[47040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:50 compute-0 python3.9[47042]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:37:50 compute-0 sudo[47040]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:37:51 compute-0 sudo[47163]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hodsgvolumrffzflvbrlnybbosqwokil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645470.2898169-109-12674638864185/AnsiballZ_copy.py'
Jan 05 20:37:51 compute-0 sudo[47163]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:51 compute-0 python3.9[47165]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/networks/podman.json group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645470.2898169-109-12674638864185/.source.json follow=False _original_basename=podman_network_config.j2 checksum=7ff7a9760bace41dc8376067e8aa93b70791ae5a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:37:51 compute-0 sudo[47163]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:52 compute-0 sudo[47315]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onxvzzexrsmxzrnzhmwpganmabmgkvkv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645471.9920335-124-231689803382057/AnsiballZ_stat.py'
Jan 05 20:37:52 compute-0 sudo[47315]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:52 compute-0 python3.9[47317]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:37:52 compute-0 sudo[47315]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:53 compute-0 sudo[47438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jntagdrbdngmbmapmmlwurzwqkdzzhtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645471.9920335-124-231689803382057/AnsiballZ_copy.py'
Jan 05 20:37:53 compute-0 sudo[47438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:53 compute-0 python3.9[47440]: ansible-ansible.legacy.copy Invoked with dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf group=root mode=0644 owner=root setype=etc_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645471.9920335-124-231689803382057/.source.conf follow=False _original_basename=registries.conf.j2 checksum=bd8960d09011f95ec8946d00609d580926fa47cd backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:53 compute-0 sudo[47438]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:54 compute-0 sudo[47590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcjhycukmbljmozsduresymvgbclkrub ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645473.463352-140-164159909101797/AnsiballZ_ini_file.py'
Jan 05 20:37:54 compute-0 sudo[47590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:54 compute-0 python3.9[47592]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:54 compute-0 sudo[47590]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:54 compute-0 sudo[47742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jksjjqzcrvyovqqifhtuknqqqnwwwxes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645474.424534-140-238971033961703/AnsiballZ_ini_file.py'
Jan 05 20:37:54 compute-0 sudo[47742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:55 compute-0 python3.9[47744]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:55 compute-0 sudo[47742]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:55 compute-0 sudo[47894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tzgnwzwxuaunhqoorzjtcqqqucdebjvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645475.2070837-140-196636529010673/AnsiballZ_ini_file.py'
Jan 05 20:37:55 compute-0 sudo[47894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:55 compute-0 python3.9[47896]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:55 compute-0 sudo[47894]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:56 compute-0 sudo[48046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-folblmxauytoxlvzmstllabnioofbnij ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645476.0101292-140-238578070230384/AnsiballZ_ini_file.py'
Jan 05 20:37:56 compute-0 sudo[48046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:56 compute-0 python3.9[48048]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:37:56 compute-0 sudo[48046]: pam_unix(sudo:session): session closed for user root
Jan 05 20:37:57 compute-0 python3.9[48198]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:37:58 compute-0 sudo[48350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wtloilqlnuivyeahxbxueqphkpxwkqge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645477.863484-180-278594658422009/AnsiballZ_dnf.py'
Jan 05 20:37:58 compute-0 sudo[48350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:37:58 compute-0 python3.9[48352]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:37:59 compute-0 sudo[48350]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:00 compute-0 sudo[48503]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hoqefiadndocpjyvuouogafyzbtpgtxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645479.9728673-188-157227515089441/AnsiballZ_dnf.py'
Jan 05 20:38:00 compute-0 sudo[48503]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:00 compute-0 python3.9[48505]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openstack-network-scripts'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:02 compute-0 sudo[48503]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:03 compute-0 sudo[48663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpaxfxhhstqjupvdlsibyppwcmeifzfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645482.806173-198-99050382602906/AnsiballZ_dnf.py'
Jan 05 20:38:03 compute-0 sudo[48663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:03 compute-0 python3.9[48665]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['podman', 'buildah'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:04 compute-0 sudo[48663]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:05 compute-0 sudo[48816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qgngesrxikauxffgeuyzbdmdtsfofoqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645484.9927936-207-215329647460183/AnsiballZ_dnf.py'
Jan 05 20:38:05 compute-0 sudo[48816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:05 compute-0 python3.9[48818]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['tuned', 'tuned-profiles-cpu-partitioning'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:06 compute-0 sudo[48816]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:07 compute-0 sudo[48969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivuuqnogzgezflcsrfjzqotoigqqmwff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645487.3508384-218-19003127270608/AnsiballZ_dnf.py'
Jan 05 20:38:07 compute-0 sudo[48969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:07 compute-0 python3.9[48971]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['NetworkManager-ovs'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:09 compute-0 sudo[48969]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:10 compute-0 sudo[49125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemqigqabqkrqinupyvbyzvommrvadce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645490.0053155-226-146429933776401/AnsiballZ_dnf.py'
Jan 05 20:38:10 compute-0 sudo[49125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:10 compute-0 python3.9[49127]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['os-net-config'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:14 compute-0 sudo[49125]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:15 compute-0 sudo[49294]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jxckjhqajxedeiuleseyttrmjtjdaheq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645495.0296419-235-271770703898775/AnsiballZ_dnf.py'
Jan 05 20:38:15 compute-0 sudo[49294]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:15 compute-0 python3.9[49296]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openssh-server'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:16 compute-0 sudo[49294]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:17 compute-0 sudo[49447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ludlssfinplglkhjjjhanrkswnaxftra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645497.2761035-244-115543091660475/AnsiballZ_dnf.py'
Jan 05 20:38:17 compute-0 sudo[49447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:17 compute-0 python3.9[49449]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:31 compute-0 sudo[49447]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:31 compute-0 sudo[49792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxdusvsyuqgprbvokaguvptsxbqknzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645511.351541-253-219153261282523/AnsiballZ_dnf.py'
Jan 05 20:38:31 compute-0 sudo[49792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:31 compute-0 python3.9[49794]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['iscsi-initiator-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:33 compute-0 sudo[49792]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:34 compute-0 sudo[49948]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vquuaejxjovjnrjlhnisfpykhdlaxozy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645513.7071342-263-136682246042290/AnsiballZ_dnf.py'
Jan 05 20:38:34 compute-0 sudo[49948]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:34 compute-0 python3.9[49950]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['device-mapper-multipath'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:38:36 compute-0 sudo[49948]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:36 compute-0 sudo[50105]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebbaiqghgriytspfqecxzmfoogqkfgck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645516.609236-274-228408967911826/AnsiballZ_file.py'
Jan 05 20:38:36 compute-0 sudo[50105]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:37 compute-0 python3.9[50107]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:38:37 compute-0 sudo[50105]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:37 compute-0 sudo[50280]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgefbsukigjdymetbqmxsubxsdchyupd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645517.4388368-282-11742452412977/AnsiballZ_stat.py'
Jan 05 20:38:37 compute-0 sudo[50280]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:38 compute-0 python3.9[50282]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:38:38 compute-0 sudo[50280]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:38 compute-0 sudo[50403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlujgpekdczldjogjepviugxnypgwca ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645517.4388368-282-11742452412977/AnsiballZ_copy.py'
Jan 05 20:38:38 compute-0 sudo[50403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:38 compute-0 python3.9[50405]: ansible-ansible.legacy.copy Invoked with dest=/root/.config/containers/auth.json group=zuul mode=0660 owner=zuul src=/home/zuul/.ansible/tmp/ansible-tmp-1767645517.4388368-282-11742452412977/.source.json _original_basename=.vo3oyc4n follow=False checksum=bf21a9e8fbc5a3846fb05b4fa0859e0917b2202f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:38:38 compute-0 sudo[50403]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:39 compute-0 sudo[50555]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ukygczrrzociqanqruhvvuoggtskyyrk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645519.264904-300-3778567545177/AnsiballZ_podman_image.py'
Jan 05 20:38:39 compute-0 sudo[50555]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:40 compute-0 python3.9[50557]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:38:40 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:38:42 compute-0 systemd[1]: var-lib-containers-storage-overlay-compat2843945931-lower\x2dmapped.mount: Deactivated successfully.
Jan 05 20:38:46 compute-0 podman[50570]: 2026-01-05 20:38:46.418854048 +0000 UTC m=+6.324503027 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 20:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:38:46 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:38:46 compute-0 sudo[50555]: pam_unix(sudo:session): session closed for user root
Jan 05 20:38:47 compute-0 sudo[50867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dckafthdjccwqzodkwmyymuvyyqhqqvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645527.1152325-311-39710594821839/AnsiballZ_podman_image.py'
Jan 05 20:38:47 compute-0 sudo[50867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:38:47 compute-0 python3.9[50869]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:38:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:38:51 compute-0 sshd-session[49467]: Connection closed by 115.190.103.59 port 54910 [preauth]
Jan 05 20:39:01 compute-0 podman[50881]: 2026-01-05 20:39:01.723620575 +0000 UTC m=+13.937878563 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 20:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:01 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:02 compute-0 sudo[50867]: pam_unix(sudo:session): session closed for user root
Jan 05 20:39:02 compute-0 sudo[51177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-twyjvhdzeevkkpmtiswcmyurnlaiuwiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645542.351793-321-190294322020783/AnsiballZ_podman_image.py'
Jan 05 20:39:02 compute-0 sudo[51177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:39:02 compute-0 python3.9[51179]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:39:03 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:03 compute-0 sshd-session[50946]: Connection closed by 115.190.103.59 port 41226 [preauth]
Jan 05 20:39:22 compute-0 podman[51191]: 2026-01-05 20:39:22.833622506 +0000 UTC m=+19.789569583 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 20:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:22 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:23 compute-0 sudo[51177]: pam_unix(sudo:session): session closed for user root
Jan 05 20:39:23 compute-0 sudo[51447]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhqgxkicnxjrkvurduweuiltboncofuz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645563.649126-332-140174536098100/AnsiballZ_podman_image.py'
Jan 05 20:39:23 compute-0 sudo[51447]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:39:24 compute-0 python3.9[51449]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:39:24 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:25 compute-0 sshd-session[51245]: Connection closed by 115.190.103.59 port 58960 [preauth]
Jan 05 20:39:42 compute-0 podman[51461]: 2026-01-05 20:39:42.776482952 +0000 UTC m=+18.511202237 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 05 20:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:42 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:43 compute-0 sudo[51447]: pam_unix(sudo:session): session closed for user root
Jan 05 20:39:43 compute-0 sudo[51779]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmhpaclrfayrfgbhyceuqrkeojmpaqdw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645583.2266793-332-233357940167669/AnsiballZ_podman_image.py'
Jan 05 20:39:43 compute-0 sudo[51779]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:39:43 compute-0 python3.9[51781]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/prometheus/node-exporter:v1.5.0 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:39:43 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:45 compute-0 podman[51793]: 2026-01-05 20:39:45.672322012 +0000 UTC m=+1.790893872 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 05 20:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:45 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:46 compute-0 sudo[51779]: pam_unix(sudo:session): session closed for user root
Jan 05 20:39:46 compute-0 sudo[52069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kclkuujnehxlayknounjzfloawelzahb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645586.3488545-348-144682780448853/AnsiballZ_podman_image.py'
Jan 05 20:39:46 compute-0 sudo[52069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:39:46 compute-0 python3.9[52071]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:39:47 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:50 compute-0 podman[52084]: 2026-01-05 20:39:50.721320975 +0000 UTC m=+3.657224208 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 05 20:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:50 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:39:51 compute-0 sudo[52069]: pam_unix(sudo:session): session closed for user root
Jan 05 20:39:51 compute-0 sudo[52336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tssfsonpmbkhllevnoyuyumgjtjtftya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645591.2378902-348-37259682923241/AnsiballZ_podman_image.py'
Jan 05 20:39:51 compute-0 sudo[52336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:39:51 compute-0 python3.9[52338]: ansible-containers.podman.podman_image Invoked with auth_file=/root/.config/containers/auth.json name=quay.io/sustainable_computing_io/kepler:release-0.7.12 tag=latest pull=True push=False force=False state=present executable=podman build={'force_rm': False, 'format': 'oci', 'cache': True, 'rm': True, 'annotation': None, 'file': None, 'container_file': None, 'volume': None, 'extra_args': None, 'target': None} push_args={'ssh': None, 'compress': None, 'format': None, 'remove_signatures': None, 'sign_by': None, 'dest': None, 'extra_args': None, 'transport': None} arch=None pull_extra_args=None path=None validate_certs=None username=None password=NOT_LOGGING_PARAMETER ca_cert_dir=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None
Jan 05 20:39:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:40:00 compute-0 podman[52350]: 2026-01-05 20:40:00.405306306 +0000 UTC m=+8.512484791 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 05 20:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:40:00 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:40:00 compute-0 sudo[52336]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:01 compute-0 sshd-session[45552]: Connection closed by 192.168.122.30 port 58330
Jan 05 20:40:01 compute-0 sshd-session[45549]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:40:01 compute-0 systemd[1]: session-10.scope: Deactivated successfully.
Jan 05 20:40:01 compute-0 systemd[1]: session-10.scope: Consumed 3min 10.140s CPU time.
Jan 05 20:40:01 compute-0 systemd-logind[788]: Session 10 logged out. Waiting for processes to exit.
Jan 05 20:40:01 compute-0 systemd-logind[788]: Removed session 10.
Jan 05 20:40:06 compute-0 sshd-session[52597]: Accepted publickey for zuul from 192.168.122.30 port 60268 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:40:06 compute-0 systemd-logind[788]: New session 11 of user zuul.
Jan 05 20:40:06 compute-0 systemd[1]: Started Session 11 of User zuul.
Jan 05 20:40:06 compute-0 sshd-session[52597]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:40:08 compute-0 python3.9[52750]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:40:09 compute-0 sudo[52904]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wamvsbpxqdoenltzhsrwqbjqnfhpreaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645608.8826008-36-188109946048468/AnsiballZ_getent.py'
Jan 05 20:40:09 compute-0 sudo[52904]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:09 compute-0 python3.9[52906]: ansible-ansible.builtin.getent Invoked with database=passwd key=openvswitch fail_key=True service=None split=None
Jan 05 20:40:09 compute-0 sudo[52904]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:10 compute-0 sudo[53057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kevnrerqdiszaebuolqousmkajvqbuah ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645609.9393961-44-230644657629160/AnsiballZ_group.py'
Jan 05 20:40:10 compute-0 sudo[53057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:10 compute-0 python3.9[53059]: ansible-ansible.builtin.group Invoked with gid=42476 name=openvswitch state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:40:10 compute-0 groupadd[53060]: group added to /etc/group: name=openvswitch, GID=42476
Jan 05 20:40:10 compute-0 groupadd[53060]: group added to /etc/gshadow: name=openvswitch
Jan 05 20:40:10 compute-0 groupadd[53060]: new group: name=openvswitch, GID=42476
Jan 05 20:40:10 compute-0 sudo[53057]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:11 compute-0 sudo[53215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nddydevtaxostcedzjeqmptcsepgglgv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645610.9448364-52-26471063055925/AnsiballZ_user.py'
Jan 05 20:40:11 compute-0 sudo[53215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:11 compute-0 python3.9[53217]: ansible-ansible.builtin.user Invoked with comment=openvswitch user group=openvswitch groups=['hugetlbfs'] name=openvswitch shell=/sbin/nologin state=present uid=42476 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 20:40:11 compute-0 useradd[53219]: new user: name=openvswitch, UID=42476, GID=42476, home=/home/openvswitch, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 20:40:11 compute-0 useradd[53219]: add 'openvswitch' to group 'hugetlbfs'
Jan 05 20:40:11 compute-0 useradd[53219]: add 'openvswitch' to shadow group 'hugetlbfs'
Jan 05 20:40:11 compute-0 sudo[53215]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:12 compute-0 sudo[53375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sshvtzbjozyjdumpvaeyhllmukrdbdyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645612.2645864-62-111150405342375/AnsiballZ_setup.py'
Jan 05 20:40:12 compute-0 sudo[53375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:13 compute-0 python3.9[53377]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:40:13 compute-0 sudo[53375]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:13 compute-0 sudo[53459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cxkurmhjroqnoigicmmdkdndblmqimmk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645612.2645864-62-111150405342375/AnsiballZ_dnf.py'
Jan 05 20:40:13 compute-0 sudo[53459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:14 compute-0 python3.9[53461]: ansible-ansible.legacy.dnf Invoked with download_only=True name=['openvswitch'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:40:16 compute-0 sudo[53459]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:16 compute-0 sudo[53621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqocttxyikyytyjjydrenewxlfyciolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645616.440559-76-43922275303150/AnsiballZ_dnf.py'
Jan 05 20:40:16 compute-0 sudo[53621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:16 compute-0 python3.9[53623]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:40:30 compute-0 kernel: SELinux:  Converting 2734 SID table entries...
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:40:31 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:40:31 compute-0 groupadd[53646]: group added to /etc/group: name=unbound, GID=994
Jan 05 20:40:31 compute-0 groupadd[53646]: group added to /etc/gshadow: name=unbound
Jan 05 20:40:31 compute-0 groupadd[53646]: new group: name=unbound, GID=994
Jan 05 20:40:31 compute-0 useradd[53653]: new user: name=unbound, UID=993, GID=994, home=/var/lib/unbound, shell=/sbin/nologin, from=none
Jan 05 20:40:31 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=7 res=1
Jan 05 20:40:31 compute-0 systemd[1]: Started daily update of the root trust anchor for DNSSEC.
Jan 05 20:40:33 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:40:33 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:40:33 compute-0 systemd[1]: Reloading.
Jan 05 20:40:33 compute-0 systemd-sysv-generator[54154]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:40:33 compute-0 systemd-rc-local-generator[54148]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:40:33 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:40:34 compute-0 sudo[53621]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:34 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:40:34 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:40:34 compute-0 systemd[1]: man-db-cache-update.service: Consumed 1.149s CPU time.
Jan 05 20:40:34 compute-0 systemd[1]: run-rf360fac3d3ca4203a7e836bc080b1fd3.service: Deactivated successfully.
Jan 05 20:40:35 compute-0 sudo[54720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fuvwlsfvcwbnlakskasjafieyawcojfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645634.3729677-84-263999549794814/AnsiballZ_systemd.py'
Jan 05 20:40:35 compute-0 sudo[54720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:35 compute-0 python3.9[54722]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:40:35 compute-0 systemd[1]: Reloading.
Jan 05 20:40:35 compute-0 systemd-sysv-generator[54756]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:40:35 compute-0 systemd-rc-local-generator[54753]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:40:35 compute-0 systemd[1]: Starting Open vSwitch Database Unit...
Jan 05 20:40:35 compute-0 chown[54764]: /usr/bin/chown: cannot access '/run/openvswitch': No such file or directory
Jan 05 20:40:35 compute-0 ovs-ctl[54769]: /etc/openvswitch/conf.db does not exist ... (warning).
Jan 05 20:40:35 compute-0 ovs-ctl[54769]: Creating empty database /etc/openvswitch/conf.db [  OK  ]
Jan 05 20:40:36 compute-0 ovs-ctl[54769]: Starting ovsdb-server [  OK  ]
Jan 05 20:40:36 compute-0 ovs-vsctl[54818]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait -- init -- set Open_vSwitch . db-version=8.5.1
Jan 05 20:40:36 compute-0 ovs-vsctl[54834]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait set Open_vSwitch . ovs-version=3.3.5-115.el9s "external-ids:system-id=\"d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b\"" "external-ids:rundir=\"/var/run/openvswitch\"" "system-type=\"centos\"" "system-version=\"9\""
Jan 05 20:40:36 compute-0 ovs-ctl[54769]: Configuring Open vSwitch system IDs [  OK  ]
Jan 05 20:40:36 compute-0 ovs-ctl[54769]: Enabling remote OVSDB managers [  OK  ]
Jan 05 20:40:36 compute-0 systemd[1]: Started Open vSwitch Database Unit.
Jan 05 20:40:36 compute-0 ovs-vsctl[54844]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 05 20:40:36 compute-0 systemd[1]: Starting Open vSwitch Delete Transient Ports...
Jan 05 20:40:36 compute-0 systemd[1]: Finished Open vSwitch Delete Transient Ports.
Jan 05 20:40:36 compute-0 systemd[1]: Starting Open vSwitch Forwarding Unit...
Jan 05 20:40:36 compute-0 kernel: openvswitch: Open vSwitch switching datapath
Jan 05 20:40:36 compute-0 ovs-ctl[54888]: Inserting openvswitch module [  OK  ]
Jan 05 20:40:36 compute-0 ovs-ctl[54857]: Starting ovs-vswitchd [  OK  ]
Jan 05 20:40:36 compute-0 ovs-vsctl[54907]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --no-wait add Open_vSwitch . external-ids hostname=compute-0
Jan 05 20:40:36 compute-0 ovs-ctl[54857]: Enabling remote OVSDB managers [  OK  ]
Jan 05 20:40:36 compute-0 systemd[1]: Started Open vSwitch Forwarding Unit.
Jan 05 20:40:36 compute-0 systemd[1]: Starting Open vSwitch...
Jan 05 20:40:36 compute-0 systemd[1]: Finished Open vSwitch.
Jan 05 20:40:36 compute-0 sudo[54720]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:37 compute-0 python3.9[55058]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:40:38 compute-0 sudo[55208]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqyzbovxfmsrmlqquifgdsksxbonfkoi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645637.9104693-102-57876824493974/AnsiballZ_sefcontext.py'
Jan 05 20:40:38 compute-0 sudo[55208]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:38 compute-0 python3.9[55210]: ansible-community.general.sefcontext Invoked with selevel=s0 setype=container_file_t state=present target=/var/lib/edpm-config(/.*)? ignore_selinux_state=False ftype=a reload=True substitute=None seuser=None
Jan 05 20:40:40 compute-0 kernel: SELinux:  Converting 2748 SID table entries...
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:40:40 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:40:40 compute-0 sudo[55208]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:41 compute-0 python3.9[55365]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local', 'distribution'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:40:42 compute-0 sudo[55521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhdzztgyglhgfnjrwkvcuvbmlkbkyifl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645641.9705384-120-233389627669359/AnsiballZ_dnf.py'
Jan 05 20:40:42 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=8 res=1
Jan 05 20:40:42 compute-0 sudo[55521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:42 compute-0 python3.9[55523]: ansible-ansible.legacy.dnf Invoked with name=['driverctl', 'lvm2', 'crudini', 'jq', 'nftables', 'NetworkManager', 'openstack-selinux', 'python3-libselinux', 'python3-pyyaml', 'rsync', 'tmpwatch', 'sysstat', 'iproute-tc', 'ksmtuned', 'systemd-container', 'crypto-policies-scripts', 'grubby', 'sos'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:40:43 compute-0 sudo[55521]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:44 compute-0 sudo[55674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzfzcxdndnzjncfxnimmghtyacedhdol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645644.0440693-128-89457398771908/AnsiballZ_command.py'
Jan 05 20:40:44 compute-0 sudo[55674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:44 compute-0 python3.9[55676]: ansible-ansible.legacy.command Invoked with _raw_params=rpm -V driverctl lvm2 crudini jq nftables NetworkManager openstack-selinux python3-libselinux python3-pyyaml rsync tmpwatch sysstat iproute-tc ksmtuned systemd-container crypto-policies-scripts grubby sos _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:40:45 compute-0 sudo[55674]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:46 compute-0 sudo[55961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soqolocdijciitjegyvzsdhwygtyimhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645645.902448-136-77290966765604/AnsiballZ_file.py'
Jan 05 20:40:46 compute-0 sudo[55961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:46 compute-0 python3.9[55963]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config selevel=s0 setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None attributes=None
Jan 05 20:40:46 compute-0 sudo[55961]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:47 compute-0 python3.9[56113]: ansible-ansible.builtin.stat Invoked with path=/etc/cloud/cloud.cfg.d follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:40:48 compute-0 sudo[56265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwuirhlpdvalbmfpclwszokstlhrnoxe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645648.0296404-152-24741804493928/AnsiballZ_dnf.py'
Jan 05 20:40:48 compute-0 sudo[56265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:48 compute-0 python3.9[56267]: ansible-ansible.legacy.dnf Invoked with name=['NetworkManager-ovs'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:40:50 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:40:50 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:40:50 compute-0 systemd[1]: Reloading.
Jan 05 20:40:50 compute-0 systemd-sysv-generator[56309]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:40:50 compute-0 systemd-rc-local-generator[56306]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:40:51 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:40:51 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:40:51 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:40:51 compute-0 systemd[1]: run-r5c54674d60ae4e90ab89eec73bd8328e.service: Deactivated successfully.
Jan 05 20:40:51 compute-0 sudo[56265]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:52 compute-0 sudo[56583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnxeqaycijlczyccingigdnstrpeepir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645651.704474-160-76671187566656/AnsiballZ_systemd.py'
Jan 05 20:40:52 compute-0 sudo[56583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:52 compute-0 python3.9[56585]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:40:52 compute-0 systemd[1]: NetworkManager-wait-online.service: Deactivated successfully.
Jan 05 20:40:52 compute-0 systemd[1]: Stopped Network Manager Wait Online.
Jan 05 20:40:52 compute-0 systemd[1]: Stopping Network Manager Wait Online...
Jan 05 20:40:52 compute-0 systemd[1]: Stopping Network Manager...
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.4229] caught SIGTERM, shutting down normally.
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.4245] dhcp4 (eth0): canceled DHCP transaction
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.4245] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.4245] dhcp4 (eth0): state changed no lease
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.4248] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 20:40:52 compute-0 NetworkManager[7183]: <info>  [1767645652.6735] exiting (success)
Jan 05 20:40:52 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:40:52 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:40:52 compute-0 systemd[1]: NetworkManager.service: Deactivated successfully.
Jan 05 20:40:52 compute-0 systemd[1]: Stopped Network Manager.
Jan 05 20:40:52 compute-0 systemd[1]: NetworkManager.service: Consumed 20.046s CPU time, 4.3M memory peak, read 0B from disk, written 37.0K to disk.
Jan 05 20:40:52 compute-0 systemd[1]: Starting Network Manager...
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.7885] NetworkManager (version 1.54.2-1.el9) is starting... (after a restart, boot:a742f362-63b2-484d-bd96-34f7a12572fa)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.7889] Read config: /etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.7968] manager[0x556b0511f000]: monitoring kernel firmware directory '/lib/firmware'.
Jan 05 20:40:52 compute-0 systemd[1]: Starting Hostname Service...
Jan 05 20:40:52 compute-0 systemd[1]: Started Hostname Service.
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9117] hostname: hostname: using hostnamed
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9118] hostname: static hostname changed from (none) to "compute-0"
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9125] dns-mgr: init: dns=default,systemd-resolved rc-manager=symlink (auto)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9134] manager[0x556b0511f000]: rfkill: Wi-Fi hardware radio set enabled
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9134] manager[0x556b0511f000]: rfkill: WWAN hardware radio set enabled
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9171] Loaded device plugin: NMOvsFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-ovs.so)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9186] Loaded device plugin: NMTeamFactory (/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-device-plugin-team.so)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9187] manager: rfkill: Wi-Fi enabled by radio killswitch; enabled by state file
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9188] manager: rfkill: WWAN enabled by radio killswitch; enabled by state file
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9189] manager: Networking is enabled by state file
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9194] settings: Loaded settings plugin: keyfile (internal)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9204] settings: Loaded settings plugin: ifcfg-rh ("/usr/lib64/NetworkManager/1.54.2-1.el9/libnm-settings-plugin-ifcfg-rh.so")
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9231] Warning: the ifcfg-rh plugin is deprecated, please migrate connections to the keyfile format using "nmcli connection migrate"
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9242] dhcp: init: Using DHCP client 'internal'
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9245] manager: (lo): new Loopback device (/org/freedesktop/NetworkManager/Devices/1)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9250] device (lo): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9255] device (lo): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9263] device (lo): Activation: starting connection 'lo' (13386405-8334-4b8c-b612-8be49be697c2)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9270] device (eth0): carrier: link connected
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9273] manager: (eth0): new Ethernet device (/org/freedesktop/NetworkManager/Devices/2)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9277] manager: (eth0): assume: will attempt to assume matching connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03) (indicated)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9277] device (eth0): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9284] device (eth0): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9290] device (eth0): Activation: starting connection 'System eth0' (5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9294] device (eth1): carrier: link connected
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9298] manager: (eth1): new Ethernet device (/org/freedesktop/NetworkManager/Devices/3)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9303] manager: (eth1): assume: will attempt to assume matching connection 'ci-private-network' (b2147c2e-bb86-524a-bb40-29a4bf6eda54) (indicated)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9304] device (eth1): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9308] device (eth1): state change: unavailable -> disconnected (reason 'connection-assumed', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9316] device (eth1): Activation: starting connection 'ci-private-network' (b2147c2e-bb86-524a-bb40-29a4bf6eda54)
Jan 05 20:40:52 compute-0 systemd[1]: Started Network Manager.
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9332] bus-manager: acquired D-Bus service "org.freedesktop.NetworkManager"
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9339] device (lo): state change: disconnected -> prepare (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9341] device (lo): state change: prepare -> config (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9343] device (lo): state change: config -> ip-config (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9345] device (eth0): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9347] device (eth0): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9355] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9358] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9360] device (lo): state change: ip-config -> ip-check (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9365] device (eth0): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9368] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9378] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9389] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9417] device (lo): state change: ip-check -> secondaries (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9419] device (lo): state change: secondaries -> activated (reason 'none', managed-type: 'external')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9423] device (lo): Activation: successful, device activated.
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9430] dhcp4 (eth0): state changed new lease, address=38.102.83.179
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9435] policy: set 'System eth0' (eth0) as default for IPv4 routing and DNS
Jan 05 20:40:52 compute-0 systemd[1]: Starting Network Manager Wait Online...
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9504] device (eth0): state change: ip-config -> ip-check (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9512] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9514] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9517] manager: NetworkManager state is now CONNECTED_LOCAL
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9520] device (eth1): Activation: successful, device activated.
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9542] device (eth0): state change: ip-check -> secondaries (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9549] device (eth0): state change: secondaries -> activated (reason 'none', managed-type: 'assume')
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9554] manager: NetworkManager state is now CONNECTED_SITE
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9558] device (eth0): Activation: successful, device activated.
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9565] manager: NetworkManager state is now CONNECTED_GLOBAL
Jan 05 20:40:52 compute-0 NetworkManager[56598]: <info>  [1767645652.9567] manager: startup complete
Jan 05 20:40:52 compute-0 systemd[1]: Finished Network Manager Wait Online.
Jan 05 20:40:52 compute-0 sudo[56583]: pam_unix(sudo:session): session closed for user root
Jan 05 20:40:53 compute-0 sudo[56810]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlmeafgfcnxksdxhfhpjaybzymjbryff ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645653.2290196-168-75056872174619/AnsiballZ_dnf.py'
Jan 05 20:40:53 compute-0 sudo[56810]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:40:53 compute-0 python3.9[56812]: ansible-ansible.legacy.dnf Invoked with name=['os-net-config'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:40:58 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:40:58 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:40:58 compute-0 systemd[1]: Reloading.
Jan 05 20:40:58 compute-0 systemd-rc-local-generator[56868]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:40:58 compute-0 systemd-sysv-generator[56871]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:40:58 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:40:59 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:40:59 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:40:59 compute-0 systemd[1]: run-r877771630ba5434db544d87a7c7f7139.service: Deactivated successfully.
Jan 05 20:40:59 compute-0 sudo[56810]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:00 compute-0 sudo[57272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vypphwqveconaccpvhmahwndntwqkdsx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645660.430015-180-60311309087938/AnsiballZ_stat.py'
Jan 05 20:41:00 compute-0 sudo[57272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:01 compute-0 python3.9[57274]: ansible-ansible.builtin.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:41:01 compute-0 sudo[57272]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:01 compute-0 sudo[57424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fboavzxcxkcjvwfznfznmtwecqljnpzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645661.3466177-189-495156966871/AnsiballZ_ini_file.py'
Jan 05 20:41:01 compute-0 sudo[57424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:02 compute-0 python3.9[57426]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=no-auto-default path=/etc/NetworkManager/NetworkManager.conf section=main state=present value=* exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:02 compute-0 sudo[57424]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:02 compute-0 sudo[57578]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opnqexrqxvhkcjwsusxxbkxiultceluf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645662.4363103-199-216896597926885/AnsiballZ_ini_file.py'
Jan 05 20:41:02 compute-0 sudo[57578]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:03 compute-0 python3.9[57580]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:03 compute-0 sudo[57578]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:03 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:41:03 compute-0 sudo[57730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjdrpvcwodzahyfxkcqitshikmljfyae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645663.2259054-199-173661416849513/AnsiballZ_ini_file.py'
Jan 05 20:41:03 compute-0 sudo[57730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:03 compute-0 python3.9[57732]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=dns path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=none exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:03 compute-0 sudo[57730]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:04 compute-0 sudo[57882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-druxdmynruroevgudyejdaaybdgsdxyj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645664.011085-214-276552617001576/AnsiballZ_ini_file.py'
Jan 05 20:41:04 compute-0 sudo[57882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:04 compute-0 python3.9[57884]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/NetworkManager.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:04 compute-0 sudo[57882]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:05 compute-0 sudo[58034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yyukmckzkpneybbemxapaxyxpcidmnzf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645664.846281-214-19378889859851/AnsiballZ_ini_file.py'
Jan 05 20:41:05 compute-0 sudo[58034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:05 compute-0 python3.9[58036]: ansible-community.general.ini_file Invoked with backup=True mode=0644 no_extra_spaces=True option=rc-manager path=/etc/NetworkManager/conf.d/99-cloud-init.conf section=main state=absent value=unmanaged exclusive=True ignore_spaces=False allow_no_value=False modify_inactive_option=True create=True follow=False unsafe_writes=False section_has_values=None values=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:05 compute-0 sudo[58034]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:05 compute-0 sudo[58186]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfpxaxkqoaphidaaktmtmotweudzcaxx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645665.6128974-229-21482831520758/AnsiballZ_stat.py'
Jan 05 20:41:06 compute-0 sudo[58186]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:06 compute-0 python3.9[58188]: ansible-ansible.legacy.stat Invoked with path=/etc/dhcp/dhclient-enter-hooks follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:41:06 compute-0 sudo[58186]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:06 compute-0 sudo[58309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phwlcbtobqneayslqsmwzmuddrndlbwa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645665.6128974-229-21482831520758/AnsiballZ_copy.py'
Jan 05 20:41:06 compute-0 sudo[58309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:06 compute-0 python3.9[58311]: ansible-ansible.legacy.copy Invoked with dest=/etc/dhcp/dhclient-enter-hooks mode=0755 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645665.6128974-229-21482831520758/.source _original_basename=.4njc815z follow=False checksum=f6278a40de79a9841f6ed1fc584538225566990c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:06 compute-0 sudo[58309]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:07 compute-0 sudo[58461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avxkkbappymujgfktalmcrwklxkpdtla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645667.2377062-244-21754682794909/AnsiballZ_file.py'
Jan 05 20:41:07 compute-0 sudo[58461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:07 compute-0 python3.9[58463]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/os-net-config state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:07 compute-0 sudo[58461]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:08 compute-0 sudo[58613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aovfdffxnbynihxrusijahyvefzedljz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645668.120409-252-156714779543311/AnsiballZ_edpm_os_net_config_mappings.py'
Jan 05 20:41:08 compute-0 sudo[58613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:08 compute-0 python3.9[58615]: ansible-edpm_os_net_config_mappings Invoked with net_config_data_lookup={}
Jan 05 20:41:08 compute-0 sudo[58613]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:09 compute-0 sudo[58765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-derlctuervltjqdibitluybccwcjnajp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645669.2027903-261-157962450602026/AnsiballZ_file.py'
Jan 05 20:41:09 compute-0 sudo[58765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:09 compute-0 python3.9[58767]: ansible-ansible.builtin.file Invoked with path=/var/lib/edpm-config/scripts state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:09 compute-0 sudo[58765]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:10 compute-0 sudo[58917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hkhncbxnwlywqvvmvyamprrcasfmcsvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645670.0339842-271-30269001711875/AnsiballZ_stat.py'
Jan 05 20:41:10 compute-0 sudo[58917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:10 compute-0 sudo[58917]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:11 compute-0 sudo[59040]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxxcqgbbowsygnmcnomwrzvnmegepxz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645670.0339842-271-30269001711875/AnsiballZ_copy.py'
Jan 05 20:41:11 compute-0 sudo[59040]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:11 compute-0 sudo[59040]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:12 compute-0 sudo[59192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jdbiiqxuraazouobqkbmwdbpharovboj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645671.541982-286-225771088918803/AnsiballZ_slurp.py'
Jan 05 20:41:12 compute-0 sudo[59192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:12 compute-0 python3.9[59194]: ansible-ansible.builtin.slurp Invoked with path=/etc/os-net-config/config.yaml src=/etc/os-net-config/config.yaml
Jan 05 20:41:12 compute-0 sudo[59192]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:13 compute-0 sudo[59367]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eidmpzlfguxfvnximcwjefxwsluwgyip ; ANSIBLE_ASYNC_DIR=\'~/.ansible_async\' /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645672.5672228-295-38511224058340/async_wrapper.py j908814709289 300 /home/zuul/.ansible/tmp/ansible-tmp-1767645672.5672228-295-38511224058340/AnsiballZ_edpm_os_net_config.py _'
Jan 05 20:41:13 compute-0 sudo[59367]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:13 compute-0 ansible-async_wrapper.py[59369]: Invoked with j908814709289 300 /home/zuul/.ansible/tmp/ansible-tmp-1767645672.5672228-295-38511224058340/AnsiballZ_edpm_os_net_config.py _
Jan 05 20:41:13 compute-0 ansible-async_wrapper.py[59372]: Starting module and watcher
Jan 05 20:41:13 compute-0 ansible-async_wrapper.py[59372]: Start watching 59373 (300)
Jan 05 20:41:13 compute-0 ansible-async_wrapper.py[59373]: Start module (59373)
Jan 05 20:41:13 compute-0 ansible-async_wrapper.py[59369]: Return async_wrapper task started.
Jan 05 20:41:13 compute-0 sudo[59367]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:13 compute-0 python3.9[59374]: ansible-edpm_os_net_config Invoked with cleanup=True config_file=/etc/os-net-config/config.yaml debug=True detailed_exit_codes=True safe_defaults=False use_nmstate=True
Jan 05 20:41:14 compute-0 kernel: cfg80211: Loading compiled-in X.509 certificates for regulatory database
Jan 05 20:41:14 compute-0 kernel: Loaded X.509 cert 'sforshee: 00b28ddf47aef9cea7'
Jan 05 20:41:14 compute-0 kernel: Loaded X.509 cert 'wens: 61c038651aabdcf94bd0ac7ff06c7248db18c600'
Jan 05 20:41:14 compute-0 kernel: platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
Jan 05 20:41:14 compute-0 kernel: cfg80211: failed to load regulatory.db
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.8539] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.8567] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9393] manager: (br-ex): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/4)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9396] audit: op="connection-add" uuid="1df2e95c-8d85-438f-83ff-4168a2993a9d" name="br-ex-br" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9421] manager: (br-ex): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/5)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9424] audit: op="connection-add" uuid="d1d8abf9-27b1-4b05-b581-13df35853731" name="br-ex-port" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9444] manager: (eth1): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/6)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9447] audit: op="connection-add" uuid="4d602bc8-27b1-492d-ad43-e5fa442cd25a" name="eth1-port" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9468] manager: (vlan20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/7)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9471] audit: op="connection-add" uuid="1c684d13-7aa9-47bf-9d02-0d3a1772843f" name="vlan20-port" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9491] manager: (vlan21): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/8)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9494] audit: op="connection-add" uuid="3a878a52-bea0-41ef-8db0-aba78abd407c" name="vlan21-port" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9519] manager: (vlan22): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/9)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9522] audit: op="connection-add" uuid="951f5346-9279-4366-b1f2-8e19427b2298" name="vlan22-port" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9557] audit: op="connection-update" uuid="5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03" name="System eth0" args="802-3-ethernet.mtu,connection.timestamp,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.dhcp-timeout,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9586] manager: (br-ex): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/10)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9589] audit: op="connection-add" uuid="8ab6299c-0d62-40e8-93e9-ef40d556025c" name="br-ex-if" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9681] audit: op="connection-update" uuid="b2147c2e-bb86-524a-bb40-29a4bf6eda54" name="ci-private-network" args="connection.port-type,connection.timestamp,connection.controller,connection.master,connection.slave-type,ipv6.addr-gen-mode,ipv6.addresses,ipv6.routing-rules,ipv6.dns,ipv6.method,ipv6.routes,ipv4.addresses,ipv4.dns,ipv4.routing-rules,ipv4.never-default,ipv4.method,ipv4.routes,ovs-external-ids.data,ovs-interface.type" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9714] manager: (vlan20): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/11)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9716] audit: op="connection-add" uuid="5dd8708c-7c0a-47cd-aa7f-2ce30d70464e" name="vlan20-if" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9746] manager: (vlan21): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/12)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9749] audit: op="connection-add" uuid="60ad8bb3-a3c3-4da9-bf8f-d8128c7b9e30" name="vlan21-if" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9779] manager: (vlan22): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/13)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9783] audit: op="connection-add" uuid="86aa67be-9e14-433a-8300-f6b2c1a3c439" name="vlan22-if" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9800] audit: op="connection-delete" uuid="4f272fd3-0f0d-3c27-be08-0346479a4132" name="Wired connection 1" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9824] device (br-ex)[Open vSwitch Bridge]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9828] device (br-ex)[Open vSwitch Bridge]: error setting IPv4 forwarding to '1': Success
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9844] device (br-ex)[Open vSwitch Bridge]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9852] device (br-ex)[Open vSwitch Bridge]: Activation: starting connection 'br-ex-br' (1df2e95c-8d85-438f-83ff-4168a2993a9d)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9854] audit: op="connection-activate" uuid="1df2e95c-8d85-438f-83ff-4168a2993a9d" name="br-ex-br" pid=59375 uid=0 result="success"
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9857] device (br-ex)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9859] device (br-ex)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9870] device (br-ex)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9879] device (br-ex)[Open vSwitch Port]: Activation: starting connection 'br-ex-port' (d1d8abf9-27b1-4b05-b581-13df35853731)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9883] device (eth1)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9885] device (eth1)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9895] device (eth1)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9902] device (eth1)[Open vSwitch Port]: Activation: starting connection 'eth1-port' (4d602bc8-27b1-492d-ad43-e5fa442cd25a)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9906] device (vlan20)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9907] device (vlan20)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9916] device (vlan20)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9923] device (vlan20)[Open vSwitch Port]: Activation: starting connection 'vlan20-port' (1c684d13-7aa9-47bf-9d02-0d3a1772843f)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9925] device (vlan21)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9927] device (vlan21)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9935] device (vlan21)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9941] device (vlan21)[Open vSwitch Port]: Activation: starting connection 'vlan21-port' (3a878a52-bea0-41ef-8db0-aba78abd407c)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9944] device (vlan22)[Open vSwitch Port]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9945] device (vlan22)[Open vSwitch Port]: error setting IPv4 forwarding to '1': Resource temporarily unavailable
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9954] device (vlan22)[Open vSwitch Port]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9960] device (vlan22)[Open vSwitch Port]: Activation: starting connection 'vlan22-port' (951f5346-9279-4366-b1f2-8e19427b2298)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9962] device (br-ex)[Open vSwitch Bridge]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9966] device (br-ex)[Open vSwitch Bridge]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9968] device (br-ex)[Open vSwitch Bridge]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9976] device (br-ex)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <warn>  [1767645675.9978] device (br-ex)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9981] device (br-ex)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9986] device (br-ex)[Open vSwitch Interface]: Activation: starting connection 'br-ex-if' (8ab6299c-0d62-40e8-93e9-ef40d556025c)
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9987] device (br-ex)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9992] device (br-ex)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9995] device (br-ex)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9996] device (br-ex)[Open vSwitch Port]: Activation: connection 'br-ex-port' attached as port, continuing activation
Jan 05 20:41:15 compute-0 NetworkManager[56598]: <info>  [1767645675.9997] device (eth1): state change: activated -> deactivating (reason 'new-activation', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0010] device (eth1): disconnecting for new activation request.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0011] device (eth1)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0015] device (eth1)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0018] device (eth1)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0019] device (eth1)[Open vSwitch Port]: Activation: connection 'eth1-port' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0023] device (vlan20)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <warn>  [1767645676.0025] device (vlan20)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0028] device (vlan20)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0033] device (vlan20)[Open vSwitch Interface]: Activation: starting connection 'vlan20-if' (5dd8708c-7c0a-47cd-aa7f-2ce30d70464e)
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0034] device (vlan20)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0038] device (vlan20)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0040] device (vlan20)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0042] device (vlan20)[Open vSwitch Port]: Activation: connection 'vlan20-port' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0045] device (vlan21)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <warn>  [1767645676.0047] device (vlan21)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0050] device (vlan21)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0056] device (vlan21)[Open vSwitch Interface]: Activation: starting connection 'vlan21-if' (60ad8bb3-a3c3-4da9-bf8f-d8128c7b9e30)
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0056] device (vlan21)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0060] device (vlan21)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0062] device (vlan21)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0064] device (vlan21)[Open vSwitch Port]: Activation: connection 'vlan21-port' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0068] device (vlan22)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <warn>  [1767645676.0069] device (vlan22)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0073] device (vlan22)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'user-requested', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0078] device (vlan22)[Open vSwitch Interface]: Activation: starting connection 'vlan22-if' (86aa67be-9e14-433a-8300-f6b2c1a3c439)
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0079] device (vlan22)[Open vSwitch Port]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0083] device (vlan22)[Open vSwitch Port]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0085] device (vlan22)[Open vSwitch Port]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0087] device (vlan22)[Open vSwitch Port]: Activation: connection 'vlan22-port' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0089] device (br-ex)[Open vSwitch Bridge]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0104] audit: op="device-reapply" interface="eth0" ifindex=2 args="802-3-ethernet.mtu,connection.autoconnect-priority,ipv6.addr-gen-mode,ipv6.method,ipv4.dhcp-timeout,ipv4.dhcp-client-id" pid=59375 uid=0 result="success"
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0107] device (br-ex)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0111] device (br-ex)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0114] device (br-ex)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0123] device (br-ex)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0128] device (eth1)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0134] device (vlan20)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0137] device (vlan20)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0140] device (vlan20)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0145] device (vlan20)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0150] device (vlan21)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 kernel: ovs-system: entered promiscuous mode
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0154] device (vlan21)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0156] device (vlan21)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0163] device (vlan21)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0168] device (vlan22)[Open vSwitch Interface]: state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0172] device (vlan22)[Open vSwitch Interface]: state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0175] device (vlan22)[Open vSwitch Interface]: state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0181] device (vlan22)[Open vSwitch Port]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0187] dhcp4 (eth0): canceled DHCP transaction
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0187] dhcp4 (eth0): activation: beginning transaction (timeout in 45 seconds)
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0187] dhcp4 (eth0): state changed no lease
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0190] dhcp4 (eth0): activation: beginning transaction (no timeout)
Jan 05 20:41:16 compute-0 systemd-udevd[59381]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:41:16 compute-0 kernel: Timeout policy base is empty
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0202] device (br-ex)[Open vSwitch Interface]: Activation: connection 'br-ex-if' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0206] audit: op="device-reapply" interface="eth1" ifindex=3 pid=59375 uid=0 result="fail" reason="Device is not activated"
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0248] device (vlan20)[Open vSwitch Interface]: Activation: connection 'vlan20-if' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0257] dhcp4 (eth0): state changed new lease, address=38.102.83.179
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0263] device (vlan21)[Open vSwitch Interface]: Activation: connection 'vlan21-if' attached as port, continuing activation
Jan 05 20:41:16 compute-0 systemd[1]: Starting Network Manager Script Dispatcher Service...
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0338] device (eth1): disconnecting for new activation request.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0339] audit: op="connection-activate" uuid="b2147c2e-bb86-524a-bb40-29a4bf6eda54" name="ci-private-network" pid=59375 uid=0 result="success"
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0346] device (vlan22)[Open vSwitch Interface]: Activation: connection 'vlan22-if' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0375] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59375 uid=0 result="success"
Jan 05 20:41:16 compute-0 systemd[1]: Started Network Manager Script Dispatcher Service.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0493] device (eth1): state change: deactivating -> disconnected (reason 'new-activation', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0646] device (eth1): Activation: starting connection 'ci-private-network' (b2147c2e-bb86-524a-bb40-29a4bf6eda54)
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0653] device (br-ex)[Open vSwitch Bridge]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0663] device (eth1): state change: disconnected -> prepare (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0667] device (eth1): state change: prepare -> config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0677] device (br-ex)[Open vSwitch Bridge]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0681] device (br-ex)[Open vSwitch Bridge]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0687] device (br-ex)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0688] device (eth1)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0689] device (vlan20)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0690] device (vlan21)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0692] device (vlan22)[Open vSwitch Port]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0714] device (eth1): state change: config -> ip-config (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0721] device (br-ex)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0725] device (br-ex)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0729] device (eth1)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0732] device (eth1)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0735] device (vlan20)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0738] device (vlan20)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0741] device (vlan21)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0744] device (vlan21)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0747] device (vlan22)[Open vSwitch Port]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0750] device (vlan22)[Open vSwitch Port]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0755] device (eth1): Activation: connection 'ci-private-network' attached as port, continuing activation
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0762] device (eth1): state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 kernel: br-ex: entered promiscuous mode
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0832] device (eth1): state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0838] device (eth1): state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0848] device (eth1): Activation: successful, device activated.
Jan 05 20:41:16 compute-0 kernel: vlan22: entered promiscuous mode
Jan 05 20:41:16 compute-0 kernel: virtio_net virtio5 eth1: entered promiscuous mode
Jan 05 20:41:16 compute-0 systemd-udevd[59380]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0943] device (br-ex)[Open vSwitch Interface]: carrier: link connected
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.0958] device (br-ex)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 kernel: vlan21: entered promiscuous mode
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1024] device (br-ex)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1026] device (br-ex)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1030] device (br-ex)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 kernel: vlan20: entered promiscuous mode
Jan 05 20:41:16 compute-0 systemd-udevd[59379]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1213] device (vlan21)[Open vSwitch Interface]: carrier: link connected
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1216] device (vlan22)[Open vSwitch Interface]: carrier: link connected
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1240] device (vlan21)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1248] device (vlan22)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1287] device (vlan21)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1289] device (vlan21)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1297] device (vlan21)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1310] device (vlan22)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1312] device (vlan22)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1319] device (vlan22)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1374] device (vlan20)[Open vSwitch Interface]: carrier: link connected
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1390] device (vlan20)[Open vSwitch Interface]: state change: ip-config -> ip-check (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1424] device (vlan20)[Open vSwitch Interface]: state change: ip-check -> secondaries (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1431] device (vlan20)[Open vSwitch Interface]: state change: secondaries -> activated (reason 'none', managed-type: 'full')
Jan 05 20:41:16 compute-0 NetworkManager[56598]: <info>  [1767645676.1438] device (vlan20)[Open vSwitch Interface]: Activation: successful, device activated.
Jan 05 20:41:17 compute-0 sudo[59705]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-igtxpzkkswdgzdrbeccoignzzlhmmrri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645676.7054741-295-191352295625839/AnsiballZ_async_status.py'
Jan 05 20:41:17 compute-0 sudo[59705]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:17 compute-0 NetworkManager[56598]: <info>  [1767645677.3100] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59375 uid=0 result="success"
Jan 05 20:41:17 compute-0 python3.9[59707]: ansible-ansible.legacy.async_status Invoked with jid=j908814709289.59369 mode=status _async_dir=/root/.ansible_async
Jan 05 20:41:17 compute-0 sudo[59705]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:17 compute-0 NetworkManager[56598]: <info>  [1767645677.5400] checkpoint[0x556b050f5950]: destroy /org/freedesktop/NetworkManager/Checkpoint/1
Jan 05 20:41:17 compute-0 NetworkManager[56598]: <info>  [1767645677.5404] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/1" pid=59375 uid=0 result="success"
Jan 05 20:41:17 compute-0 NetworkManager[56598]: <info>  [1767645677.8661] audit: op="checkpoint-create" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59375 uid=0 result="success"
Jan 05 20:41:17 compute-0 NetworkManager[56598]: <info>  [1767645677.8678] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59375 uid=0 result="success"
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.0838] audit: op="networking-control" arg="global-dns-configuration" pid=59375 uid=0 result="success"
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.0870] config: signal: SET_VALUES,values,values-intern,global-dns-config (/etc/NetworkManager/NetworkManager.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf)
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.0907] audit: op="networking-control" arg="global-dns-configuration" pid=59375 uid=0 result="success"
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.0937] audit: op="checkpoint-adjust-rollback-timeout" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59375 uid=0 result="success"
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.2807] checkpoint[0x556b050f5a20]: destroy /org/freedesktop/NetworkManager/Checkpoint/2
Jan 05 20:41:18 compute-0 NetworkManager[56598]: <info>  [1767645678.2812] audit: op="checkpoint-destroy" arg="/org/freedesktop/NetworkManager/Checkpoint/2" pid=59375 uid=0 result="success"
Jan 05 20:41:18 compute-0 ansible-async_wrapper.py[59373]: Module complete (59373)
Jan 05 20:41:18 compute-0 ansible-async_wrapper.py[59372]: Done in kid B.
Jan 05 20:41:20 compute-0 sudo[59812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fgxrqvhjsitoqlsjcjliyutyhhztpgoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645676.7054741-295-191352295625839/AnsiballZ_async_status.py'
Jan 05 20:41:20 compute-0 sudo[59812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:21 compute-0 python3.9[59814]: ansible-ansible.legacy.async_status Invoked with jid=j908814709289.59369 mode=status _async_dir=/root/.ansible_async
Jan 05 20:41:21 compute-0 sudo[59812]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:21 compute-0 sudo[59911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfekbnakolybeczezjaqzsyjvqbmjtrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645676.7054741-295-191352295625839/AnsiballZ_async_status.py'
Jan 05 20:41:21 compute-0 sudo[59911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:21 compute-0 python3.9[59913]: ansible-ansible.legacy.async_status Invoked with jid=j908814709289.59369 mode=cleanup _async_dir=/root/.ansible_async
Jan 05 20:41:21 compute-0 sudo[59911]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:22 compute-0 sudo[60063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhmchztghrwzkskwvbrmfuzmceuncxy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645682.0297894-322-14428694457704/AnsiballZ_stat.py'
Jan 05 20:41:22 compute-0 sudo[60063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:22 compute-0 python3.9[60065]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/os-net-config.returncode follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:41:22 compute-0 sudo[60063]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:22 compute-0 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 05 20:41:23 compute-0 sudo[60188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-prikewuinexdmijptvuildeiemuoyihu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645682.0297894-322-14428694457704/AnsiballZ_copy.py'
Jan 05 20:41:23 compute-0 sudo[60188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:23 compute-0 python3.9[60190]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/os-net-config.returncode mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645682.0297894-322-14428694457704/.source.returncode _original_basename=.eba7a2k4 follow=False checksum=b6589fc6ab0dc82cf12099d1c2d40ab994e8410c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:23 compute-0 sudo[60188]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:23 compute-0 sudo[60340]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eplndwomcfcgpyrkothkqblcxppszsoc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645683.57368-338-1341286077447/AnsiballZ_stat.py'
Jan 05 20:41:23 compute-0 sudo[60340]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:24 compute-0 python3.9[60342]: ansible-ansible.legacy.stat Invoked with path=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:41:24 compute-0 sudo[60340]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:24 compute-0 sudo[60464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trzepomiohufmrdrhckvqmruxvuertjt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645683.57368-338-1341286077447/AnsiballZ_copy.py'
Jan 05 20:41:24 compute-0 sudo[60464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:24 compute-0 python3.9[60466]: ansible-ansible.legacy.copy Invoked with dest=/etc/cloud/cloud.cfg.d/99-edpm-disable-network-config.cfg mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645683.57368-338-1341286077447/.source.cfg _original_basename=.fhec7xu_ follow=False checksum=f3c5952a9cd4c6c31b314b25eb897168971cc86e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:24 compute-0 sudo[60464]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:25 compute-0 sudo[60616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpoelrpweyhsydgglxosvbopqzmdjapf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645685.2857985-353-16752793852851/AnsiballZ_systemd.py'
Jan 05 20:41:25 compute-0 sudo[60616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:25 compute-0 python3.9[60618]: ansible-ansible.builtin.systemd Invoked with name=NetworkManager state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:41:26 compute-0 systemd[1]: Reloading Network Manager...
Jan 05 20:41:26 compute-0 NetworkManager[56598]: <info>  [1767645686.0729] audit: op="reload" arg="0" pid=60622 uid=0 result="success"
Jan 05 20:41:26 compute-0 NetworkManager[56598]: <info>  [1767645686.0738] config: signal: SIGHUP,config-files,values,values-user,no-auto-default (/etc/NetworkManager/NetworkManager.conf, /usr/lib/NetworkManager/conf.d/00-server.conf, /run/NetworkManager/conf.d/15-carrier-timeout.conf, /var/lib/NetworkManager/NetworkManager-intern.conf)
Jan 05 20:41:26 compute-0 systemd[1]: Reloaded Network Manager.
Jan 05 20:41:26 compute-0 sudo[60616]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:26 compute-0 sshd-session[52600]: Connection closed by 192.168.122.30 port 60268
Jan 05 20:41:26 compute-0 sshd-session[52597]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:41:26 compute-0 systemd[1]: session-11.scope: Deactivated successfully.
Jan 05 20:41:26 compute-0 systemd[1]: session-11.scope: Consumed 59.229s CPU time.
Jan 05 20:41:26 compute-0 systemd-logind[788]: Session 11 logged out. Waiting for processes to exit.
Jan 05 20:41:26 compute-0 systemd-logind[788]: Removed session 11.
Jan 05 20:41:32 compute-0 sshd-session[60653]: Accepted publickey for zuul from 192.168.122.30 port 33588 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:41:32 compute-0 systemd-logind[788]: New session 12 of user zuul.
Jan 05 20:41:32 compute-0 systemd[1]: Started Session 12 of User zuul.
Jan 05 20:41:32 compute-0 sshd-session[60653]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:41:33 compute-0 python3.9[60806]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:41:34 compute-0 python3.9[60961]: ansible-ansible.builtin.setup Invoked with filter=['ansible_default_ipv4'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:41:36 compute-0 python3.9[61150]: ansible-ansible.legacy.command Invoked with _raw_params=hostname -f _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:41:36 compute-0 systemd[1]: NetworkManager-dispatcher.service: Deactivated successfully.
Jan 05 20:41:36 compute-0 sshd-session[60656]: Connection closed by 192.168.122.30 port 33588
Jan 05 20:41:36 compute-0 sshd-session[60653]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:41:36 compute-0 systemd[1]: session-12.scope: Deactivated successfully.
Jan 05 20:41:36 compute-0 systemd[1]: session-12.scope: Consumed 2.904s CPU time.
Jan 05 20:41:36 compute-0 systemd-logind[788]: Session 12 logged out. Waiting for processes to exit.
Jan 05 20:41:36 compute-0 systemd-logind[788]: Removed session 12.
Jan 05 20:41:42 compute-0 sshd-session[61180]: Accepted publickey for zuul from 192.168.122.30 port 38342 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:41:42 compute-0 systemd-logind[788]: New session 13 of user zuul.
Jan 05 20:41:42 compute-0 systemd[1]: Started Session 13 of User zuul.
Jan 05 20:41:42 compute-0 sshd-session[61180]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:41:43 compute-0 python3.9[61334]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:41:44 compute-0 python3.9[61488]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:41:45 compute-0 sudo[61642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkzutjvypmwjzeagpcvhualbmwgszreu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645705.1144173-40-19264166639582/AnsiballZ_setup.py'
Jan 05 20:41:45 compute-0 sudo[61642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:45 compute-0 python3.9[61644]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:41:46 compute-0 sudo[61642]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:46 compute-0 sudo[61727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlogvvmewywwhjueffykfgoznkyzrji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645705.1144173-40-19264166639582/AnsiballZ_dnf.py'
Jan 05 20:41:46 compute-0 sudo[61727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:46 compute-0 python3.9[61729]: ansible-ansible.legacy.dnf Invoked with name=['podman'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:41:48 compute-0 sudo[61727]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:48 compute-0 sudo[61880]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agnjrptdabwuifldshdzfuqskpcerini ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645708.3051486-52-255746806212478/AnsiballZ_setup.py'
Jan 05 20:41:48 compute-0 sudo[61880]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:49 compute-0 python3.9[61882]: ansible-ansible.builtin.setup Invoked with filter=['ansible_interfaces'] gather_subset=['!all', '!min', 'network'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:41:49 compute-0 sudo[61880]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:50 compute-0 sudo[62071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yiqzfamrskvlvelzvctjtmlneyuwesap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645709.6479766-63-151889417890817/AnsiballZ_file.py'
Jan 05 20:41:50 compute-0 sudo[62071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:50 compute-0 python3.9[62073]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/containers/networks recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:50 compute-0 sudo[62071]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:51 compute-0 sudo[62223]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kxvtzpkyqowwcxwaikvatojlgodyzorb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645710.8011804-71-170583768938195/AnsiballZ_command.py'
Jan 05 20:41:51 compute-0 sudo[62223]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:51 compute-0 python3.9[62225]: ansible-ansible.legacy.command Invoked with _raw_params=podman network inspect podman _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:41:51 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:41:51 compute-0 sudo[62223]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:52 compute-0 sudo[62386]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmioisvklafjmnbuylxwmyzedurhxqrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645711.8251133-79-68711637492408/AnsiballZ_stat.py'
Jan 05 20:41:52 compute-0 sudo[62386]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:52 compute-0 python3.9[62388]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/networks/podman.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:41:52 compute-0 sudo[62386]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:52 compute-0 sudo[62464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivatrvcscngnzliavxxnzkprnwotmzks ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645711.8251133-79-68711637492408/AnsiballZ_file.py'
Jan 05 20:41:52 compute-0 sudo[62464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:53 compute-0 python3.9[62466]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/containers/networks/podman.json _original_basename=podman_network_config.j2 recurse=False state=file path=/etc/containers/networks/podman.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:41:53 compute-0 sudo[62464]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:53 compute-0 sudo[62616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zstfpsdzhejofugppvcclofiswervlyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645713.2178037-91-131818939653518/AnsiballZ_stat.py'
Jan 05 20:41:53 compute-0 sudo[62616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:53 compute-0 python3.9[62618]: ansible-ansible.legacy.stat Invoked with path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:41:54 compute-0 sudo[62616]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:54 compute-0 sudo[62694]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnwqvzpdpwwsdbfhugqxrzznfzeeauvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645713.2178037-91-131818939653518/AnsiballZ_file.py'
Jan 05 20:41:54 compute-0 sudo[62694]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:54 compute-0 python3.9[62696]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root setype=etc_t dest=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf _original_basename=registries.conf.j2 recurse=False state=file path=/etc/containers/registries.conf.d/20-edpm-podman-registries.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:41:54 compute-0 sudo[62694]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:55 compute-0 sudo[62846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cativobebkjzmnuhsqtopgjvoropevfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645714.8178835-104-227638066334770/AnsiballZ_ini_file.py'
Jan 05 20:41:55 compute-0 sudo[62846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:55 compute-0 python3.9[62848]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=pids_limit owner=root path=/etc/containers/containers.conf section=containers setype=etc_t value=4096 backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:41:55 compute-0 sudo[62846]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:56 compute-0 sudo[62998]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agclmyhcazetepyrxvcsdkptsmxzskgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645715.7654982-104-7471245423768/AnsiballZ_ini_file.py'
Jan 05 20:41:56 compute-0 sudo[62998]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:56 compute-0 python3.9[63000]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=events_logger owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="journald" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:41:56 compute-0 sudo[62998]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:56 compute-0 sudo[63150]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dokoixrzenuqbrtmyygdzxayyhxjcmbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645716.5300045-104-159872021924634/AnsiballZ_ini_file.py'
Jan 05 20:41:56 compute-0 sudo[63150]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:57 compute-0 python3.9[63152]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=runtime owner=root path=/etc/containers/containers.conf section=engine setype=etc_t value="crun" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:41:57 compute-0 sudo[63150]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:57 compute-0 sudo[63302]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-msiaxoznxnwrkqxwenqzusvbhpwonuio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645717.2877862-104-250567431910635/AnsiballZ_ini_file.py'
Jan 05 20:41:57 compute-0 sudo[63302]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:57 compute-0 python3.9[63304]: ansible-community.general.ini_file Invoked with create=True group=root mode=0644 option=network_backend owner=root path=/etc/containers/containers.conf section=network setype=etc_t value="netavark" backup=False state=present exclusive=True no_extra_spaces=False ignore_spaces=False allow_no_value=False modify_inactive_option=True follow=False unsafe_writes=False section_has_values=None values=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:41:57 compute-0 sudo[63302]: pam_unix(sudo:session): session closed for user root
Jan 05 20:41:58 compute-0 sudo[63454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffmarfyiqejlcysoopiocrmlxervavsn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645718.1153367-135-193446677635425/AnsiballZ_dnf.py'
Jan 05 20:41:58 compute-0 sudo[63454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:41:58 compute-0 python3.9[63456]: ansible-ansible.legacy.dnf Invoked with name=['openssh-server'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:41:59 compute-0 sudo[63454]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:00 compute-0 sudo[63607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnlffgawmlvpxuvmfjxbyjpbdhzwuswb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645720.4240448-146-202675678110951/AnsiballZ_setup.py'
Jan 05 20:42:00 compute-0 sudo[63607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:01 compute-0 python3.9[63609]: ansible-setup Invoked with gather_subset=['!all', '!min', 'distribution', 'distribution_major_version', 'distribution_version', 'os_family'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:42:01 compute-0 sudo[63607]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:01 compute-0 sudo[63761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlvwqurnhiulazzfqxhhqaowdxposdir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645721.3487067-154-180521101931710/AnsiballZ_stat.py'
Jan 05 20:42:01 compute-0 sudo[63761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:01 compute-0 python3.9[63763]: ansible-stat Invoked with path=/run/ostree-booted follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:42:01 compute-0 sudo[63761]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:02 compute-0 sudo[63913]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snlwpuplrbgbqignboqobnhiivojxnsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645722.2710958-163-231007695537541/AnsiballZ_stat.py'
Jan 05 20:42:02 compute-0 sudo[63913]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:02 compute-0 python3.9[63915]: ansible-stat Invoked with path=/sbin/transactional-update follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:42:02 compute-0 sudo[63913]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:03 compute-0 sudo[64065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dodsmmkexavusvpesdlblsvvzwlgcocy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645723.1931622-173-180816281609639/AnsiballZ_command.py'
Jan 05 20:42:03 compute-0 sudo[64065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:03 compute-0 python3.9[64067]: ansible-ansible.legacy.command Invoked with _raw_params=systemctl is-system-running _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:42:03 compute-0 sudo[64065]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:04 compute-0 sudo[64218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehxzsacflldopmzqjwtegayhbmwlwdxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645724.0532403-183-176644950127414/AnsiballZ_service_facts.py'
Jan 05 20:42:04 compute-0 sudo[64218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:04 compute-0 python3.9[64220]: ansible-service_facts Invoked
Jan 05 20:42:04 compute-0 network[64237]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:42:04 compute-0 network[64238]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:42:04 compute-0 network[64239]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:42:09 compute-0 sudo[64218]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:10 compute-0 sudo[64522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wstaspihuevkkfwuflxmjohwysvizbew ; /bin/bash /home/zuul/.ansible/tmp/ansible-tmp-1767645730.4267893-198-147321883736706/AnsiballZ_timesync_provider.sh /home/zuul/.ansible/tmp/ansible-tmp-1767645730.4267893-198-147321883736706/args'
Jan 05 20:42:10 compute-0 sudo[64522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:10 compute-0 sudo[64522]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:11 compute-0 sudo[64689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqmczplfxqiejglivjxzeujqizqxipoa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645731.2560706-209-5755753497522/AnsiballZ_dnf.py'
Jan 05 20:42:11 compute-0 sudo[64689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:11 compute-0 python3.9[64691]: ansible-ansible.legacy.dnf Invoked with name=['chrony'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:42:13 compute-0 sudo[64689]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:14 compute-0 sudo[64842]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrlcwvroafplkzoenhmoagsxilesjidz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645733.4861777-222-81143665127831/AnsiballZ_package_facts.py'
Jan 05 20:42:14 compute-0 sudo[64842]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:15 compute-0 python3.9[64844]: ansible-package_facts Invoked with manager=['auto'] strategy=first
Jan 05 20:42:15 compute-0 sudo[64842]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:16 compute-0 sudo[64994]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtpifcmewzylwdfebvabevmxbsvtsxpb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645736.1380916-232-212937693338805/AnsiballZ_stat.py'
Jan 05 20:42:16 compute-0 sudo[64994]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:16 compute-0 python3.9[64996]: ansible-ansible.legacy.stat Invoked with path=/etc/chrony.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:16 compute-0 sudo[64994]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:17 compute-0 sudo[65119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zonlwpfjymxgerwgezrmlpwdhnerkuri ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645736.1380916-232-212937693338805/AnsiballZ_copy.py'
Jan 05 20:42:17 compute-0 sudo[65119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:17 compute-0 python3.9[65121]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/chrony.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645736.1380916-232-212937693338805/.source.conf follow=False _original_basename=chrony.conf.j2 checksum=cfb003e56d02d0d2c65555452eb1a05073fecdad force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:17 compute-0 sudo[65119]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:18 compute-0 sudo[65273]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwtuxecmwcxdosadqlbetypqdpjraysm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645737.9417531-247-105813550238655/AnsiballZ_stat.py'
Jan 05 20:42:18 compute-0 sudo[65273]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:18 compute-0 python3.9[65275]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/chronyd follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:18 compute-0 sudo[65273]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:19 compute-0 sudo[65398]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpzagtndkoqojdsqtvvgautoyghojbwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645737.9417531-247-105813550238655/AnsiballZ_copy.py'
Jan 05 20:42:19 compute-0 sudo[65398]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:19 compute-0 python3.9[65400]: ansible-ansible.legacy.copy Invoked with backup=True dest=/etc/sysconfig/chronyd mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645737.9417531-247-105813550238655/.source follow=False _original_basename=chronyd.sysconfig.j2 checksum=dd196b1ff1f915b23eebc37ec77405b5dd3df76c force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:19 compute-0 sudo[65398]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:22 compute-0 sudo[65552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qkvnkcnsnedwzwhsaxdzquvcsdqphdwz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645739.8058622-268-143491429166412/AnsiballZ_lineinfile.py'
Jan 05 20:42:22 compute-0 sudo[65552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:22 compute-0 python3.9[65554]: ansible-lineinfile Invoked with backup=True create=True dest=/etc/sysconfig/network line=PEERNTP=no mode=0644 regexp=^PEERNTP= state=present path=/etc/sysconfig/network encoding=utf-8 backrefs=False firstmatch=False unsafe_writes=False search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:22 compute-0 sudo[65552]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:23 compute-0 sudo[65706]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kasrbqglilczsfriksiycknqdgtrdnpv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645743.1445067-283-135094282622705/AnsiballZ_setup.py'
Jan 05 20:42:23 compute-0 sudo[65706]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:23 compute-0 python3.9[65708]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:42:24 compute-0 sudo[65706]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:24 compute-0 sudo[65790]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkgqgouloxyoqoklrpnafkhhvdtkguvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645743.1445067-283-135094282622705/AnsiballZ_systemd.py'
Jan 05 20:42:24 compute-0 sudo[65790]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:24 compute-0 python3.9[65792]: ansible-ansible.legacy.systemd Invoked with enabled=True name=chronyd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:42:25 compute-0 sudo[65790]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:25 compute-0 sudo[65944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ltqnalcbhbdqrhcuuiiqhicgquuhzdcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645745.517828-299-231251427636215/AnsiballZ_setup.py'
Jan 05 20:42:25 compute-0 sudo[65944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:26 compute-0 python3.9[65946]: ansible-ansible.legacy.setup Invoked with gather_subset=['!all'] filter=['ansible_service_mgr'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:42:26 compute-0 sudo[65944]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:26 compute-0 sudo[66028]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzkwaophpctmjubamqeumahooujnvxcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645745.517828-299-231251427636215/AnsiballZ_systemd.py'
Jan 05 20:42:26 compute-0 sudo[66028]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:27 compute-0 python3.9[66030]: ansible-ansible.legacy.systemd Invoked with name=chronyd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:42:27 compute-0 chronyd[798]: chronyd exiting
Jan 05 20:42:27 compute-0 systemd[1]: Stopping NTP client/server...
Jan 05 20:42:27 compute-0 systemd[1]: chronyd.service: Deactivated successfully.
Jan 05 20:42:27 compute-0 systemd[1]: Stopped NTP client/server.
Jan 05 20:42:27 compute-0 systemd[1]: Starting NTP client/server...
Jan 05 20:42:27 compute-0 chronyd[66040]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER +SIGND +NTS +SECHASH +IPV6 +DEBUG)
Jan 05 20:42:27 compute-0 chronyd[66040]: Frequency -26.199 +/- 0.220 ppm read from /var/lib/chrony/drift
Jan 05 20:42:27 compute-0 chronyd[66040]: Loaded seccomp filter (level 2)
Jan 05 20:42:27 compute-0 systemd[1]: Started NTP client/server.
Jan 05 20:42:27 compute-0 sudo[66028]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:27 compute-0 sshd-session[61183]: Connection closed by 192.168.122.30 port 38342
Jan 05 20:42:27 compute-0 sshd-session[61180]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:42:27 compute-0 systemd[1]: session-13.scope: Deactivated successfully.
Jan 05 20:42:27 compute-0 systemd[1]: session-13.scope: Consumed 30.671s CPU time.
Jan 05 20:42:27 compute-0 systemd-logind[788]: Session 13 logged out. Waiting for processes to exit.
Jan 05 20:42:27 compute-0 systemd-logind[788]: Removed session 13.
Jan 05 20:42:28 compute-0 sshd-session[66066]: Invalid user admin from 43.226.60.137 port 45648
Jan 05 20:42:28 compute-0 sshd-session[66066]: Connection closed by invalid user admin 43.226.60.137 port 45648 [preauth]
Jan 05 20:42:33 compute-0 sshd-session[66068]: Accepted publickey for zuul from 192.168.122.30 port 35572 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:42:33 compute-0 systemd-logind[788]: New session 14 of user zuul.
Jan 05 20:42:33 compute-0 systemd[1]: Started Session 14 of User zuul.
Jan 05 20:42:33 compute-0 sshd-session[66068]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:42:34 compute-0 python3.9[66221]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:42:35 compute-0 sudo[66375]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bqipbzapgmdrlhafflmbpcjnvxjjodgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645754.920571-33-168947951523880/AnsiballZ_file.py'
Jan 05 20:42:35 compute-0 sudo[66375]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:35 compute-0 python3.9[66377]: ansible-ansible.builtin.file Invoked with group=zuul mode=0770 owner=zuul path=/root/.config/containers recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:35 compute-0 sudo[66375]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:36 compute-0 sudo[66550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wegjxkvzsswqskczlzsemlbwsmgnqblq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645755.8715627-41-258938581161816/AnsiballZ_stat.py'
Jan 05 20:42:36 compute-0 sudo[66550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:36 compute-0 python3.9[66552]: ansible-ansible.legacy.stat Invoked with path=/root/.config/containers/auth.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:36 compute-0 sudo[66550]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:37 compute-0 sudo[66628]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-teukqsbsejlvbopizsnseabkgpznwagr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645755.8715627-41-258938581161816/AnsiballZ_file.py'
Jan 05 20:42:37 compute-0 sudo[66628]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:37 compute-0 python3.9[66630]: ansible-ansible.legacy.file Invoked with group=zuul mode=0660 owner=zuul dest=/root/.config/containers/auth.json _original_basename=.xrna4kkb recurse=False state=file path=/root/.config/containers/auth.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:37 compute-0 sudo[66628]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:38 compute-0 sudo[66780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwkqnfczcebwunxdyohifewztzizgrxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645757.6866143-61-210820150128032/AnsiballZ_stat.py'
Jan 05 20:42:38 compute-0 sudo[66780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:38 compute-0 python3.9[66782]: ansible-ansible.legacy.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:38 compute-0 sudo[66780]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:38 compute-0 sudo[66903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wxgiaxotkoylrnnbwknqqelwywcbvtyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645757.6866143-61-210820150128032/AnsiballZ_copy.py'
Jan 05 20:42:38 compute-0 sudo[66903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:39 compute-0 python3.9[66905]: ansible-ansible.legacy.copy Invoked with dest=/etc/sysconfig/podman_drop_in mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645757.6866143-61-210820150128032/.source _original_basename=.kkn7l93_ follow=False checksum=125299ce8dea7711a76292961206447f0043248b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:39 compute-0 sudo[66903]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:39 compute-0 sudo[67055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahfgpecxsizhlakaptpsmxydavlaxplu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645759.494455-77-92902700381284/AnsiballZ_file.py'
Jan 05 20:42:39 compute-0 sudo[67055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:40 compute-0 python3.9[67057]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:42:40 compute-0 sudo[67055]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:40 compute-0 sudo[67207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etiydbmfrszntqtqhbfmwstoxdflrmyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645760.2445073-85-222901397268693/AnsiballZ_stat.py'
Jan 05 20:42:40 compute-0 sudo[67207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:40 compute-0 python3.9[67209]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:40 compute-0 sudo[67207]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:41 compute-0 sudo[67330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mpzlxllhxjwakookmxmfntbjivwczheh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645760.2445073-85-222901397268693/AnsiballZ_copy.py'
Jan 05 20:42:41 compute-0 sudo[67330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:41 compute-0 python3.9[67332]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-container-shutdown group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645760.2445073-85-222901397268693/.source _original_basename=edpm-container-shutdown follow=False checksum=632c3792eb3dce4288b33ae7b265b71950d69f13 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:42:41 compute-0 sudo[67330]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:41 compute-0 sudo[67482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-poozztztuchnccbqayldvgjfobqdtzfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645761.6480355-85-48018510471521/AnsiballZ_stat.py'
Jan 05 20:42:41 compute-0 sudo[67482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:42 compute-0 python3.9[67484]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:42 compute-0 sudo[67482]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:42 compute-0 sudo[67605]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsibrcgormadwgnfarhvgsyygqaljlpz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645761.6480355-85-48018510471521/AnsiballZ_copy.py'
Jan 05 20:42:42 compute-0 sudo[67605]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:42 compute-0 python3.9[67607]: ansible-ansible.legacy.copy Invoked with dest=/var/local/libexec/edpm-start-podman-container group=root mode=0700 owner=root setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645761.6480355-85-48018510471521/.source _original_basename=edpm-start-podman-container follow=False checksum=b963c569d75a655c0ccae95d9bb4a2a9a4df27d1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:42:42 compute-0 sudo[67605]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:43 compute-0 sudo[67757]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spncacvpkzanhrovnwhqtkkncjwcixin ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645762.987564-114-185781245562109/AnsiballZ_file.py'
Jan 05 20:42:43 compute-0 sudo[67757]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:43 compute-0 python3.9[67759]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:43 compute-0 sudo[67757]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:44 compute-0 sudo[67909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqofdvcnowyncdmoghkpfzfassxfkzfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645763.7678413-122-172330137260100/AnsiballZ_stat.py'
Jan 05 20:42:44 compute-0 sudo[67909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:44 compute-0 python3.9[67911]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:44 compute-0 sudo[67909]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:44 compute-0 sudo[68032]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vktscresexukgztxiioahoggfddheill ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645763.7678413-122-172330137260100/AnsiballZ_copy.py'
Jan 05 20:42:44 compute-0 sudo[68032]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:44 compute-0 python3.9[68034]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm-container-shutdown.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645763.7678413-122-172330137260100/.source.service _original_basename=edpm-container-shutdown-service follow=False checksum=6336835cb0f888670cc99de31e19c8c071444d33 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:44 compute-0 sudo[68032]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:45 compute-0 sudo[68184]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etcxvswemxjytysqlgstpypenfyzhief ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645765.0942512-137-147732805442058/AnsiballZ_stat.py'
Jan 05 20:42:45 compute-0 sudo[68184]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:45 compute-0 python3.9[68186]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:45 compute-0 sudo[68184]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:46 compute-0 sudo[68307]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weibdikldmqzbykwjkvhzvhgugvqoiib ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645765.0942512-137-147732805442058/AnsiballZ_copy.py'
Jan 05 20:42:46 compute-0 sudo[68307]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:46 compute-0 python3.9[68309]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645765.0942512-137-147732805442058/.source.preset _original_basename=91-edpm-container-shutdown-preset follow=False checksum=b275e4375287528cb63464dd32f622c4f142a915 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:46 compute-0 sudo[68307]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:47 compute-0 sudo[68459]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawplqusfnebsdhkgcgzznygxelgvjjz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645766.5443254-152-50101296502745/AnsiballZ_systemd.py'
Jan 05 20:42:47 compute-0 sudo[68459]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:47 compute-0 python3.9[68461]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:42:47 compute-0 systemd[1]: Reloading.
Jan 05 20:42:47 compute-0 systemd-rc-local-generator[68487]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:42:47 compute-0 systemd-sysv-generator[68492]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:42:47 compute-0 systemd[1]: Reloading.
Jan 05 20:42:47 compute-0 systemd-rc-local-generator[68526]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:42:47 compute-0 systemd-sysv-generator[68530]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:42:48 compute-0 systemd[1]: Starting EDPM Container Shutdown...
Jan 05 20:42:48 compute-0 systemd[1]: Finished EDPM Container Shutdown.
Jan 05 20:42:48 compute-0 sudo[68459]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:48 compute-0 sudo[68686]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjcjpfuzakljpmvbteyufmqievkttjzi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645768.2931046-160-36034649002504/AnsiballZ_stat.py'
Jan 05 20:42:48 compute-0 sudo[68686]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:49 compute-0 python3.9[68688]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:49 compute-0 sudo[68686]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:49 compute-0 sudo[68809]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqdbgacoheancmibtedwfiugodxjpbqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645768.2931046-160-36034649002504/AnsiballZ_copy.py'
Jan 05 20:42:49 compute-0 sudo[68809]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:49 compute-0 python3.9[68811]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/netns-placeholder.service group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645768.2931046-160-36034649002504/.source.service _original_basename=netns-placeholder-service follow=False checksum=b61b1b5918c20c877b8b226fbf34ff89a082d972 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:49 compute-0 sudo[68809]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:50 compute-0 sudo[68961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfbfddiyzhxgvodzkmicdywemmnpacie ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645769.9497151-175-149355725597844/AnsiballZ_stat.py'
Jan 05 20:42:50 compute-0 sudo[68961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:50 compute-0 python3.9[68963]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:42:50 compute-0 sudo[68961]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:51 compute-0 sudo[69084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rsifflpzknepzmiywgssuqtxuolwtrps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645769.9497151-175-149355725597844/AnsiballZ_copy.py'
Jan 05 20:42:51 compute-0 sudo[69084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:51 compute-0 python3.9[69086]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system-preset/91-netns-placeholder.preset group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645769.9497151-175-149355725597844/.source.preset _original_basename=91-netns-placeholder-preset follow=False checksum=28b7b9aa893525d134a1eeda8a0a48fb25b736b9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:42:51 compute-0 sudo[69084]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:52 compute-0 sudo[69236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzqyqyqkmtznaiqgznrehzkjawewgfxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645772.0235026-190-39579736440931/AnsiballZ_systemd.py'
Jan 05 20:42:52 compute-0 sudo[69236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:52 compute-0 python3.9[69238]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:42:52 compute-0 systemd[1]: Reloading.
Jan 05 20:42:52 compute-0 systemd-rc-local-generator[69266]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:42:52 compute-0 systemd-sysv-generator[69271]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:42:52 compute-0 systemd[1]: Reloading.
Jan 05 20:42:53 compute-0 systemd-rc-local-generator[69301]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:42:53 compute-0 systemd-sysv-generator[69306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:42:53 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 20:42:53 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 20:42:53 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 20:42:53 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 20:42:53 compute-0 sudo[69236]: pam_unix(sudo:session): session closed for user root
Jan 05 20:42:54 compute-0 python3.9[69465]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:42:54 compute-0 network[69482]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:42:54 compute-0 network[69483]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:42:54 compute-0 network[69484]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:42:59 compute-0 sudo[69746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sczcgnpwcpltkedtdepsyrrncdcrqtel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645779.1695611-206-76840570434917/AnsiballZ_systemd.py'
Jan 05 20:42:59 compute-0 sudo[69746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:42:59 compute-0 sshd-session[69595]: Invalid user orangepi from 43.226.60.137 port 42704
Jan 05 20:42:59 compute-0 python3.9[69748]: ansible-ansible.builtin.systemd Invoked with enabled=False name=iptables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:43:00 compute-0 systemd[1]: Reloading.
Jan 05 20:43:00 compute-0 systemd-rc-local-generator[69780]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:43:00 compute-0 systemd-sysv-generator[69784]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:43:00 compute-0 sshd-session[69595]: Connection closed by invalid user orangepi 43.226.60.137 port 42704 [preauth]
Jan 05 20:43:00 compute-0 systemd[1]: Stopping IPv4 firewall with iptables...
Jan 05 20:43:00 compute-0 iptables.init[69789]: iptables: Setting chains to policy ACCEPT: raw mangle filter nat [  OK  ]
Jan 05 20:43:00 compute-0 iptables.init[69789]: iptables: Flushing firewall rules: [  OK  ]
Jan 05 20:43:00 compute-0 systemd[1]: iptables.service: Deactivated successfully.
Jan 05 20:43:00 compute-0 systemd[1]: Stopped IPv4 firewall with iptables.
Jan 05 20:43:00 compute-0 sudo[69746]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:01 compute-0 sudo[69984]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bskiafaehiscqbntwciejvzwwhhsdytf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645780.944642-206-253860309076335/AnsiballZ_systemd.py'
Jan 05 20:43:01 compute-0 sudo[69984]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:01 compute-0 python3.9[69986]: ansible-ansible.builtin.systemd Invoked with enabled=False name=ip6tables.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:43:01 compute-0 sudo[69984]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:02 compute-0 sudo[70138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mcplhdkptegrlpyigbvclgpetxsspwjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645782.218681-222-238993044457520/AnsiballZ_systemd.py'
Jan 05 20:43:02 compute-0 sudo[70138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:02 compute-0 python3.9[70140]: ansible-ansible.builtin.systemd Invoked with enabled=True name=nftables state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:43:04 compute-0 systemd[1]: Reloading.
Jan 05 20:43:04 compute-0 systemd-rc-local-generator[70170]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:43:04 compute-0 systemd-sysv-generator[70174]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:43:04 compute-0 systemd[1]: Starting Netfilter Tables...
Jan 05 20:43:04 compute-0 systemd[1]: Finished Netfilter Tables.
Jan 05 20:43:04 compute-0 sudo[70138]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:05 compute-0 sudo[70331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvdgzaoocwdxcjzzyccvzyifizkunhuy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645784.5420887-230-22263805137907/AnsiballZ_command.py'
Jan 05 20:43:05 compute-0 sudo[70331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:05 compute-0 python3.9[70333]: ansible-ansible.legacy.command Invoked with _raw_params=nft flush ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:05 compute-0 sudo[70331]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:06 compute-0 sudo[70484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbnllcviyxyttshpgxvbyaorordoswbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645785.8817482-244-204795599669489/AnsiballZ_stat.py'
Jan 05 20:43:06 compute-0 sudo[70484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:06 compute-0 python3.9[70486]: ansible-ansible.legacy.stat Invoked with path=/etc/ssh/sshd_config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:06 compute-0 sudo[70484]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:07 compute-0 sudo[70609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wsjexczsoulzjxxtjfweljavdjjgfxtp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645785.8817482-244-204795599669489/AnsiballZ_copy.py'
Jan 05 20:43:07 compute-0 sudo[70609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:07 compute-0 python3.9[70611]: ansible-ansible.legacy.copy Invoked with dest=/etc/ssh/sshd_config mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645785.8817482-244-204795599669489/.source validate=/usr/sbin/sshd -T -f %s follow=False _original_basename=sshd_config_block.j2 checksum=6c79f4cb960ad444688fde322eeacb8402e22d79 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:07 compute-0 sudo[70609]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:07 compute-0 sudo[70762]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hzvpsdnifamojjfepazeaclxiarphwwf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645787.571093-259-272844340713318/AnsiballZ_systemd.py'
Jan 05 20:43:07 compute-0 sudo[70762]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:08 compute-0 python3.9[70764]: ansible-ansible.builtin.systemd Invoked with name=sshd state=reloaded daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:43:08 compute-0 systemd[1]: Reloading OpenSSH server daemon...
Jan 05 20:43:08 compute-0 sshd[1007]: Received SIGHUP; restarting.
Jan 05 20:43:08 compute-0 systemd[1]: Reloaded OpenSSH server daemon.
Jan 05 20:43:08 compute-0 sshd[1007]: Server listening on 0.0.0.0 port 22.
Jan 05 20:43:08 compute-0 sshd[1007]: Server listening on :: port 22.
Jan 05 20:43:08 compute-0 sudo[70762]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:09 compute-0 sudo[70918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khugqfevkwjhjxppmadefrulnlvdfkkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645788.6497529-267-240989390482889/AnsiballZ_file.py'
Jan 05 20:43:09 compute-0 sudo[70918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:09 compute-0 python3.9[70920]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:09 compute-0 sudo[70918]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:10 compute-0 sudo[71070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojqxyriazdzrrbdbsmgbjdyvjisrjclv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645789.6831295-275-229499038758389/AnsiballZ_stat.py'
Jan 05 20:43:10 compute-0 sudo[71070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:10 compute-0 python3.9[71072]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/sshd-networks.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:10 compute-0 sudo[71070]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:10 compute-0 sudo[71193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gvcaytvogekjgmmvqambyxorxywblsnl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645789.6831295-275-229499038758389/AnsiballZ_copy.py'
Jan 05 20:43:10 compute-0 sudo[71193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:10 compute-0 python3.9[71195]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/sshd-networks.yaml group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645789.6831295-275-229499038758389/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=0bfc8440fd8f39002ab90252479fb794f51b5ae8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:10 compute-0 sudo[71193]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:11 compute-0 sudo[71345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zepxmqplyaiwofmmnkfirujpgfppgfsc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645791.2524085-293-240010111673313/AnsiballZ_timezone.py'
Jan 05 20:43:11 compute-0 sudo[71345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:11 compute-0 python3.9[71347]: ansible-community.general.timezone Invoked with name=UTC hwclock=None
Jan 05 20:43:12 compute-0 systemd[1]: Starting Time & Date Service...
Jan 05 20:43:12 compute-0 systemd[1]: Started Time & Date Service.
Jan 05 20:43:12 compute-0 sudo[71345]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:12 compute-0 sudo[71501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuwpfiikjupahxhsnfcfbqmoswuijqir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645792.4576848-302-196389943716015/AnsiballZ_file.py'
Jan 05 20:43:12 compute-0 sudo[71501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:13 compute-0 python3.9[71503]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:13 compute-0 sudo[71501]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:13 compute-0 sudo[71653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wplwrdzapmzirtzcbljnvribcgzcossq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645793.3308012-310-208054701358999/AnsiballZ_stat.py'
Jan 05 20:43:13 compute-0 sudo[71653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:13 compute-0 python3.9[71655]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:13 compute-0 sudo[71653]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:14 compute-0 sudo[71776]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wnwahouxigtsvvpmoumrsbhboaqxvied ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645793.3308012-310-208054701358999/AnsiballZ_copy.py'
Jan 05 20:43:14 compute-0 sudo[71776]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:14 compute-0 python3.9[71778]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645793.3308012-310-208054701358999/.source.yaml follow=False _original_basename=base-rules.yaml.j2 checksum=450456afcafded6d4bdecceec7a02e806eebd8b3 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:14 compute-0 sudo[71776]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:15 compute-0 sudo[71928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wrrhzxpjpikfxxfqfbucktbdlnbpeeym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645794.7749178-325-277813394043255/AnsiballZ_stat.py'
Jan 05 20:43:15 compute-0 sudo[71928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:15 compute-0 python3.9[71930]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:15 compute-0 sudo[71928]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:15 compute-0 sudo[72051]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sfqsyofodoltfsxtpieikrtjptpvmusv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645794.7749178-325-277813394043255/AnsiballZ_copy.py'
Jan 05 20:43:15 compute-0 sudo[72051]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:16 compute-0 python3.9[72053]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645794.7749178-325-277813394043255/.source.yaml _original_basename=.8bip939e follow=False checksum=97d170e1550eee4afc0af065b78cda302a97674c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:16 compute-0 sudo[72051]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:16 compute-0 sudo[72203]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lukbbczsbvrbpwyqjlzomaowchlpvnvi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645796.3108833-340-75586166898558/AnsiballZ_stat.py'
Jan 05 20:43:16 compute-0 sudo[72203]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:16 compute-0 python3.9[72205]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:16 compute-0 sudo[72203]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:17 compute-0 sudo[72326]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nzfoluheqadkoglqvqfiomnbvpvvpqkj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645796.3108833-340-75586166898558/AnsiballZ_copy.py'
Jan 05 20:43:17 compute-0 sudo[72326]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:17 compute-0 python3.9[72328]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/iptables.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645796.3108833-340-75586166898558/.source.nft _original_basename=iptables.nft follow=False checksum=3e02df08f1f3ab4a513e94056dbd390e3d38fe30 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:17 compute-0 sudo[72326]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:18 compute-0 sudo[72478]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vquqdyydzbbutvxlftpztvsvofhxzvzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645797.9277816-355-13790685179114/AnsiballZ_command.py'
Jan 05 20:43:18 compute-0 sudo[72478]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:18 compute-0 python3.9[72480]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/iptables.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:18 compute-0 sudo[72478]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:19 compute-0 sudo[72631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jfxjoljqytsqtxhvgyovyahmsjmqxugp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645798.8113415-363-113025741549229/AnsiballZ_command.py'
Jan 05 20:43:19 compute-0 sudo[72631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:19 compute-0 python3.9[72633]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:19 compute-0 sudo[72631]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:20 compute-0 sudo[72784]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cdqadsdfcqbgqaqllkugniuovnpzziza ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767645799.6410487-371-213533887665939/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 20:43:20 compute-0 sudo[72784]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:20 compute-0 python3[72786]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 20:43:20 compute-0 sudo[72784]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:21 compute-0 sudo[72936]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtzzisaejwoeiihxpyejvwxagmrifcfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645800.6349444-379-250324146215201/AnsiballZ_stat.py'
Jan 05 20:43:21 compute-0 sudo[72936]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:21 compute-0 python3.9[72938]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:21 compute-0 sudo[72936]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:21 compute-0 sudo[73059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zntfujvdqatkhagmbniycihzqjehuigs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645800.6349444-379-250324146215201/AnsiballZ_copy.py'
Jan 05 20:43:21 compute-0 sudo[73059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:21 compute-0 python3.9[73061]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645800.6349444-379-250324146215201/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:21 compute-0 sudo[73059]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:22 compute-0 sudo[73211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iujmbpszadlmzfjacksyafnioqjfjilk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645802.1888156-394-162062376571391/AnsiballZ_stat.py'
Jan 05 20:43:22 compute-0 sudo[73211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:22 compute-0 python3.9[73213]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:22 compute-0 sudo[73211]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:23 compute-0 sudo[73334]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixmamhpnywojamycmtxpijrqhayibeot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645802.1888156-394-162062376571391/AnsiballZ_copy.py'
Jan 05 20:43:23 compute-0 sudo[73334]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:23 compute-0 python3.9[73336]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645802.1888156-394-162062376571391/.source.nft follow=False _original_basename=jump-chain.j2 checksum=4c6f036d2d5808f109acc0880c19aa74ca48c961 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:23 compute-0 sudo[73334]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:24 compute-0 sudo[73486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lqncfnnpcdcqfjcokmqvavjowplzvxox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645803.7511845-409-143117539290155/AnsiballZ_stat.py'
Jan 05 20:43:24 compute-0 sudo[73486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:24 compute-0 python3.9[73488]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:24 compute-0 sudo[73486]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:24 compute-0 sudo[73609]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgubqomzazjjyofqbdhlgbkvubbeajqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645803.7511845-409-143117539290155/AnsiballZ_copy.py'
Jan 05 20:43:24 compute-0 sudo[73609]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:25 compute-0 python3.9[73611]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645803.7511845-409-143117539290155/.source.nft follow=False _original_basename=flush-chain.j2 checksum=d16337256a56373421842284fe09e4e6c7df417e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:25 compute-0 sudo[73609]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:25 compute-0 sudo[73761]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecnbbxaoxodwiqssztmabawhieejuevl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645805.3670766-424-122077525130625/AnsiballZ_stat.py'
Jan 05 20:43:25 compute-0 sudo[73761]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:26 compute-0 python3.9[73763]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:26 compute-0 sudo[73761]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:26 compute-0 sudo[73884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jwhhaeabkwielohybpblrtsggzyywvhx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645805.3670766-424-122077525130625/AnsiballZ_copy.py'
Jan 05 20:43:26 compute-0 sudo[73884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:26 compute-0 python3.9[73886]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645805.3670766-424-122077525130625/.source.nft follow=False _original_basename=chains.j2 checksum=2079f3b60590a165d1d502e763170876fc8e2984 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:26 compute-0 sudo[73884]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:27 compute-0 sudo[74036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rekoojprlwiwsrqjpwajuagwzdghiyla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645807.1150367-439-24214748702694/AnsiballZ_stat.py'
Jan 05 20:43:27 compute-0 sudo[74036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:27 compute-0 python3.9[74038]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:43:27 compute-0 sudo[74036]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:28 compute-0 sudo[74159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-urifjdwrdxmnhwrowiswbgqmzyvxekdo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645807.1150367-439-24214748702694/AnsiballZ_copy.py'
Jan 05 20:43:28 compute-0 sudo[74159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:28 compute-0 python3.9[74161]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645807.1150367-439-24214748702694/.source.nft follow=False _original_basename=ruleset.j2 checksum=15a82a0dc61abfd6aa593407582b5b950437eb80 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:28 compute-0 sudo[74159]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:28 compute-0 sudo[74311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppeiebnblwlvewkjnbasxovwhsobmrlt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645808.585671-454-55041433974707/AnsiballZ_file.py'
Jan 05 20:43:28 compute-0 sudo[74311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:29 compute-0 python3.9[74313]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:29 compute-0 sudo[74311]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:29 compute-0 sudo[74463]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bcbgxplpvoekbfvwxvgfyyujdgoedxtg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645809.4147031-462-115823344493119/AnsiballZ_command.py'
Jan 05 20:43:29 compute-0 sudo[74463]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:29 compute-0 python3.9[74465]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:30 compute-0 sudo[74463]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:30 compute-0 sudo[74624]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pvlbqfeojyqlmurnvpjmgrivgqxohwlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645810.3651292-470-240670441960581/AnsiballZ_blockinfile.py'
Jan 05 20:43:30 compute-0 sudo[74624]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:31 compute-0 python3.9[74626]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:31 compute-0 sudo[74624]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:31 compute-0 sudo[74777]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odnrwvjnyfchowglyxmgoduzfkyiwjep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645811.5130458-479-106953214503372/AnsiballZ_file.py'
Jan 05 20:43:31 compute-0 sudo[74777]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:32 compute-0 python3.9[74779]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages1G state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:32 compute-0 sudo[74777]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:32 compute-0 sshd-session[74520]: Connection closed by authenticating user root 43.226.60.137 port 37924 [preauth]
Jan 05 20:43:32 compute-0 sudo[74929]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eaabeogiuijaypsvkeswynsogpvqqaii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645812.2388966-479-113462438437927/AnsiballZ_file.py'
Jan 05 20:43:32 compute-0 sudo[74929]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:32 compute-0 python3.9[74931]: ansible-ansible.builtin.file Invoked with group=hugetlbfs mode=0775 owner=zuul path=/dev/hugepages2M state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:32 compute-0 sudo[74929]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:33 compute-0 sudo[75081]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nxvthkglqdhyoebseluycsgebsreiixp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645812.9879744-494-252247140305794/AnsiballZ_mount.py'
Jan 05 20:43:33 compute-0 sudo[75081]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:33 compute-0 python3.9[75083]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=1G path=/dev/hugepages1G src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 05 20:43:33 compute-0 sudo[75081]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:33 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:43:33 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:43:34 compute-0 sudo[75235]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bviyeeyrlynzylsckgxqfwoajpmkhepd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645813.9729445-494-120344576807394/AnsiballZ_mount.py'
Jan 05 20:43:34 compute-0 sudo[75235]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:34 compute-0 python3.9[75237]: ansible-ansible.posix.mount Invoked with fstype=hugetlbfs opts=pagesize=2M path=/dev/hugepages2M src=none state=mounted boot=True dump=0 opts_no_log=False passno=0 backup=False fstab=None
Jan 05 20:43:34 compute-0 sudo[75235]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:35 compute-0 sshd-session[66071]: Connection closed by 192.168.122.30 port 35572
Jan 05 20:43:35 compute-0 sshd-session[66068]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:43:35 compute-0 systemd[1]: session-14.scope: Deactivated successfully.
Jan 05 20:43:35 compute-0 systemd-logind[788]: Session 14 logged out. Waiting for processes to exit.
Jan 05 20:43:35 compute-0 systemd[1]: session-14.scope: Consumed 45.100s CPU time.
Jan 05 20:43:35 compute-0 systemd-logind[788]: Removed session 14.
Jan 05 20:43:41 compute-0 sshd-session[75263]: Accepted publickey for zuul from 192.168.122.30 port 36778 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:43:41 compute-0 systemd-logind[788]: New session 15 of user zuul.
Jan 05 20:43:41 compute-0 systemd[1]: Started Session 15 of User zuul.
Jan 05 20:43:41 compute-0 sshd-session[75263]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:43:41 compute-0 sudo[75416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpbcbgrfocrebfmcoitvkbtzbaaybgjg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645821.2956383-16-49467094262234/AnsiballZ_tempfile.py'
Jan 05 20:43:41 compute-0 sudo[75416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:42 compute-0 python3.9[75418]: ansible-ansible.builtin.tempfile Invoked with state=file prefix=ansible. suffix= path=None
Jan 05 20:43:42 compute-0 sudo[75416]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:42 compute-0 systemd[1]: systemd-timedated.service: Deactivated successfully.
Jan 05 20:43:42 compute-0 sudo[75570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-isafbituohypwihxlzxvxztjwonayzlf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645822.268083-28-130913826894625/AnsiballZ_stat.py'
Jan 05 20:43:42 compute-0 sudo[75570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:42 compute-0 python3.9[75572]: ansible-ansible.builtin.stat Invoked with path=/etc/ssh/ssh_known_hosts follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:43:42 compute-0 sudo[75570]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:43 compute-0 sudo[75722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruktzwpgbtqpxfszabtwugeciwoearcl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645823.2504733-38-126216550269661/AnsiballZ_setup.py'
Jan 05 20:43:43 compute-0 sudo[75722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:44 compute-0 python3.9[75724]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'ssh_host_key_rsa_public', 'ssh_host_key_ed25519_public', 'ssh_host_key_ecdsa_public'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:43:44 compute-0 sudo[75722]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:45 compute-0 sudo[75874]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqqpyhddmjpcuuwkdjxgbkpslxcuknla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645824.6336343-47-85973041266863/AnsiballZ_blockinfile.py'
Jan 05 20:43:45 compute-0 sudo[75874]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:45 compute-0 python3.9[75876]: ansible-ansible.builtin.blockinfile Invoked with block=compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-rsa AAAAB3NzaC1yc2EAAAADAQABAAABgQDUlQN8sOxIqqHKSKocyzN7+yPjjb7EmdqzkYZKCir9tnl7JbMd4Wcx1N7VToJCIcew7ZmoqScRKHZ/p82bgySeGIsbiQm0Lp4IzSjQOm7nliwlQivQbS+YOovYXZmr6W37PxS5koSKGNq8ItpSkwRpnklYeqvTzTkvvQxx/tzsF99thXTCUtECOxHTMerQ/c+FUtP4Hnz/hkFjlwAhtmCuT3QptdmX2k5tFPW9HrW/MKJJGBwmrDr16Rv6KSaoJNEwlGs746+JV1Qdhqjvp9TEqy1ERJa5wZp/lBgqiNChSbQfnPYuM6D+OaBhe+DlSKw19juFr3nrL2Jr5shEpihXrnuv4YKE7gg/Tf39LqyNMj8XnKX8CR79LrzcyVm+tQK5ULnfW1t8JidKQTo3I/iYkiQiE0oCy/AGdDRsSUcIW5BA0B4hL/Zi91VUFBgcKtNz1oPfXMDs4rg/InevxmqqsdMi77Ixz3a+hU35H5Sj1NcgNcwZNAdBWtt797ZCOiE=
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAIB1No98zUn/jFal5ac8unHP5SGOYSvPNE5zn+US7d+F+
                                            compute-0.ctlplane.example.com,192.168.122.100,compute-0* ecdsa-sha2-nistp256 AAAAE2VjZHNhLXNoYTItbmlzdHAyNTYAAAAIbmlzdHAyNTYAAABBBK8uC2zh4NXdx4st2cpAcH6pRAKF144ll+N0lNUgrZPzev4j7wZbz3W9ZbCIW+beDlXXTQfOknP+YewHw7LVQrU=
                                             create=True mode=0644 path=/tmp/ansible.eggmk5u3 state=present marker=# {mark} ANSIBLE MANAGED BLOCK backup=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:45 compute-0 sudo[75874]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:46 compute-0 sudo[76026]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpclwnghxewhquncmkxouvnwffotjjly ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645825.6794596-55-112700006284665/AnsiballZ_command.py'
Jan 05 20:43:46 compute-0 sudo[76026]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:46 compute-0 python3.9[76028]: ansible-ansible.legacy.command Invoked with _raw_params=cat '/tmp/ansible.eggmk5u3' > /etc/ssh/ssh_known_hosts _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:46 compute-0 sudo[76026]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:47 compute-0 sudo[76180]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhucxvufweponhfnbolcktcfyuvfmtol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645826.6954627-63-258175385974416/AnsiballZ_file.py'
Jan 05 20:43:47 compute-0 sudo[76180]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:47 compute-0 python3.9[76182]: ansible-ansible.builtin.file Invoked with path=/tmp/ansible.eggmk5u3 state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:43:47 compute-0 sudo[76180]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:47 compute-0 sshd-session[75266]: Connection closed by 192.168.122.30 port 36778
Jan 05 20:43:47 compute-0 sshd-session[75263]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:43:47 compute-0 systemd[1]: session-15.scope: Deactivated successfully.
Jan 05 20:43:47 compute-0 systemd[1]: session-15.scope: Consumed 4.426s CPU time.
Jan 05 20:43:47 compute-0 systemd-logind[788]: Session 15 logged out. Waiting for processes to exit.
Jan 05 20:43:47 compute-0 systemd-logind[788]: Removed session 15.
Jan 05 20:43:53 compute-0 sshd-session[76207]: Accepted publickey for zuul from 192.168.122.30 port 44134 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:43:53 compute-0 systemd-logind[788]: New session 16 of user zuul.
Jan 05 20:43:53 compute-0 systemd[1]: Started Session 16 of User zuul.
Jan 05 20:43:53 compute-0 sshd-session[76207]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:43:54 compute-0 python3.9[76360]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:43:55 compute-0 sudo[76514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwhmsevgsrgwudpbrlweprufeupqklai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645835.1575077-32-236273552192615/AnsiballZ_systemd.py'
Jan 05 20:43:55 compute-0 sudo[76514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:56 compute-0 python3.9[76516]: ansible-ansible.builtin.systemd Invoked with enabled=True name=sshd daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 05 20:43:56 compute-0 sudo[76514]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:56 compute-0 sudo[76668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wycoadwsukukcdwzyoacoubycnpyvlxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645836.4592838-40-90951953120133/AnsiballZ_systemd.py'
Jan 05 20:43:56 compute-0 sudo[76668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:57 compute-0 python3.9[76670]: ansible-ansible.builtin.systemd Invoked with name=sshd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:43:58 compute-0 sudo[76668]: pam_unix(sudo:session): session closed for user root
Jan 05 20:43:59 compute-0 sudo[76821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iycnwiaotjlgqskongwyvgfqleyaezhj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645838.5427358-49-189807423439733/AnsiballZ_command.py'
Jan 05 20:43:59 compute-0 sudo[76821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:43:59 compute-0 python3.9[76823]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:43:59 compute-0 sudo[76821]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:00 compute-0 sudo[76974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rfpystpkwaovjuiicnxcertrqhtpmlbe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645839.5318005-57-41362117647327/AnsiballZ_stat.py'
Jan 05 20:44:00 compute-0 sudo[76974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:00 compute-0 python3.9[76976]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:44:00 compute-0 sudo[76974]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:01 compute-0 sudo[77128]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tqxdeoiqddbgvjbobaixzbvspwmahhcm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645840.5129385-65-30390305117125/AnsiballZ_command.py'
Jan 05 20:44:01 compute-0 sudo[77128]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:01 compute-0 python3.9[77130]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:44:01 compute-0 sudo[77128]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:02 compute-0 sudo[77283]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bstauipoevmhoultjlmcphlnlnveqmeg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645841.5132427-73-144557921713365/AnsiballZ_file.py'
Jan 05 20:44:02 compute-0 sudo[77283]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:02 compute-0 python3.9[77285]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:02 compute-0 sudo[77283]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:02 compute-0 sshd-session[76210]: Connection closed by 192.168.122.30 port 44134
Jan 05 20:44:02 compute-0 sshd-session[76207]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:44:02 compute-0 systemd[1]: session-16.scope: Deactivated successfully.
Jan 05 20:44:02 compute-0 systemd[1]: session-16.scope: Consumed 5.665s CPU time.
Jan 05 20:44:02 compute-0 systemd-logind[788]: Session 16 logged out. Waiting for processes to exit.
Jan 05 20:44:02 compute-0 systemd-logind[788]: Removed session 16.
Jan 05 20:44:08 compute-0 sshd-session[77313]: Accepted publickey for zuul from 192.168.122.30 port 33854 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:44:08 compute-0 systemd-logind[788]: New session 17 of user zuul.
Jan 05 20:44:08 compute-0 systemd[1]: Started Session 17 of User zuul.
Jan 05 20:44:08 compute-0 sshd-session[77313]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:44:09 compute-0 python3.9[77466]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:44:10 compute-0 sudo[77620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwhrtzlqboylwnrymlkxfpugdqufgmdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645849.7533703-34-276644882597931/AnsiballZ_setup.py'
Jan 05 20:44:10 compute-0 sudo[77620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:10 compute-0 python3.9[77622]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:44:10 compute-0 sudo[77620]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:11 compute-0 sudo[77704]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbvpdmpdytidtdqdahtjwydghutdjxnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645849.7533703-34-276644882597931/AnsiballZ_dnf.py'
Jan 05 20:44:11 compute-0 sudo[77704]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:11 compute-0 python3.9[77706]: ansible-ansible.legacy.dnf Invoked with name=['yum-utils'] allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None state=None
Jan 05 20:44:12 compute-0 sudo[77704]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:13 compute-0 python3.9[77857]: ansible-ansible.legacy.command Invoked with _raw_params=needs-restarting -r _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:44:15 compute-0 python3.9[78008]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/reboot_required/'] patterns=[] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:44:16 compute-0 python3.9[78158]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:44:16 compute-0 python3.9[78308]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:44:17 compute-0 sshd-session[77316]: Connection closed by 192.168.122.30 port 33854
Jan 05 20:44:17 compute-0 sshd-session[77313]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:44:17 compute-0 systemd-logind[788]: Session 17 logged out. Waiting for processes to exit.
Jan 05 20:44:17 compute-0 systemd[1]: session-17.scope: Deactivated successfully.
Jan 05 20:44:17 compute-0 systemd[1]: session-17.scope: Consumed 6.703s CPU time.
Jan 05 20:44:17 compute-0 systemd-logind[788]: Removed session 17.
Jan 05 20:44:22 compute-0 sshd-session[78333]: Accepted publickey for zuul from 192.168.122.30 port 53644 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:44:22 compute-0 systemd-logind[788]: New session 18 of user zuul.
Jan 05 20:44:22 compute-0 systemd[1]: Started Session 18 of User zuul.
Jan 05 20:44:22 compute-0 sshd-session[78333]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:44:23 compute-0 python3.9[78486]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:44:25 compute-0 sudo[78640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-budwrfhjjefzscbduuprzzutzvymlvxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645864.9872272-50-13336376757300/AnsiballZ_file.py'
Jan 05 20:44:25 compute-0 sudo[78640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:25 compute-0 python3.9[78642]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:25 compute-0 sudo[78640]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:26 compute-0 sudo[78792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iabvfbvwzoycqwpjvmidnodxgqhzxvqu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645865.88239-50-246757528003882/AnsiballZ_file.py'
Jan 05 20:44:26 compute-0 sudo[78792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:26 compute-0 python3.9[78794]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry-power-monitoring/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:26 compute-0 sudo[78792]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:27 compute-0 sudo[78944]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srhxrmlpkbuzrbkmhbhymeyvyrwcmxyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645866.6818082-65-210739997313010/AnsiballZ_stat.py'
Jan 05 20:44:27 compute-0 sudo[78944]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:27 compute-0 python3.9[78946]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:27 compute-0 sudo[78944]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:28 compute-0 sudo[79067]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wduebzgoyunpqvnnuoyvfgobqqkcyify ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645866.6818082-65-210739997313010/AnsiballZ_copy.py'
Jan 05 20:44:28 compute-0 sudo[79067]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:28 compute-0 python3.9[79069]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645866.6818082-65-210739997313010/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=68ce71c0da9760c4c757a7db7d084e62b36069cb backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:28 compute-0 sudo[79067]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:28 compute-0 sudo[79219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tslcqhagjwgqljzqfbgjzwyuhzwjlcxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645868.4175892-65-159378590793514/AnsiballZ_stat.py'
Jan 05 20:44:28 compute-0 sudo[79219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:28 compute-0 python3.9[79221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:28 compute-0 sudo[79219]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:29 compute-0 sudo[79342]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsvekprejueiujdfeaccttswiksgnbwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645868.4175892-65-159378590793514/AnsiballZ_copy.py'
Jan 05 20:44:29 compute-0 sudo[79342]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:29 compute-0 python3.9[79344]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645868.4175892-65-159378590793514/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=868a8a93fbc124dc6310269f666f3a59af9199f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:29 compute-0 sudo[79342]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:30 compute-0 sudo[79494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-suxmsxrgymrvlemanvgcqyxugkzxybqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645869.764231-65-44125005521505/AnsiballZ_stat.py'
Jan 05 20:44:30 compute-0 sudo[79494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:30 compute-0 python3.9[79496]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:30 compute-0 sudo[79494]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:30 compute-0 sudo[79617]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ashaamrrfqwcsbeqyxsrtoswtifirxto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645869.764231-65-44125005521505/AnsiballZ_copy.py'
Jan 05 20:44:30 compute-0 sudo[79617]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:30 compute-0 python3.9[79619]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645869.764231-65-44125005521505/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=3aa14c7bed65f0943176a352333b0747da64c877 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:30 compute-0 sudo[79617]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:31 compute-0 sudo[79769]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iftxffbjudzkixsqahzmyksritgjhaac ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645871.240971-109-58787655996374/AnsiballZ_file.py'
Jan 05 20:44:31 compute-0 sudo[79769]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:31 compute-0 python3.9[79771]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:31 compute-0 sudo[79769]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:32 compute-0 sudo[79921]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-muvxagyvfxfwworfdbvjetnyzmroyget ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645872.0552652-109-118665597171633/AnsiballZ_file.py'
Jan 05 20:44:32 compute-0 sudo[79921]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:32 compute-0 python3.9[79923]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/telemetry/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:32 compute-0 sudo[79921]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:33 compute-0 sudo[80073]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjlvsbmcbjiamltsynlrozkggpbumqzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645872.8236034-124-149536633284020/AnsiballZ_stat.py'
Jan 05 20:44:33 compute-0 sudo[80073]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:33 compute-0 python3.9[80075]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:33 compute-0 sudo[80073]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:33 compute-0 sudo[80196]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cqognrawsygvrwntlelxpttpdfkeeaow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645872.8236034-124-149536633284020/AnsiballZ_copy.py'
Jan 05 20:44:33 compute-0 sudo[80196]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:33 compute-0 python3.9[80198]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645872.8236034-124-149536633284020/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=b3d4fb1dc1f39cc6ada72d74e08c6682f1b122fa backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:33 compute-0 sudo[80196]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:34 compute-0 sudo[80348]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pphysiemupqeldqlqfzneqzsjaxduyyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645874.1183345-124-208765779526420/AnsiballZ_stat.py'
Jan 05 20:44:34 compute-0 sudo[80348]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:34 compute-0 python3.9[80350]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:34 compute-0 sudo[80348]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:35 compute-0 sudo[80471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfzzyylvilhkmzdexhrmmsatbanplaqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645874.1183345-124-208765779526420/AnsiballZ_copy.py'
Jan 05 20:44:35 compute-0 sudo[80471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:35 compute-0 python3.9[80473]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645874.1183345-124-208765779526420/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=868a8a93fbc124dc6310269f666f3a59af9199f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:35 compute-0 sudo[80471]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:35 compute-0 sudo[80623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufysnryhztlirxsjzenrtqqywlcdgtqw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645875.5515764-124-237901330424544/AnsiballZ_stat.py'
Jan 05 20:44:35 compute-0 sudo[80623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:36 compute-0 python3.9[80625]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:36 compute-0 sudo[80623]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:36 compute-0 sudo[80746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nakjwjlulvrhrvxpbfclbqvejonojqbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645875.5515764-124-237901330424544/AnsiballZ_copy.py'
Jan 05 20:44:36 compute-0 sudo[80746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:36 compute-0 python3.9[80748]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/telemetry/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645875.5515764-124-237901330424544/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=515f817f14bacec448f7d8c4f8a4854b012f624c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:36 compute-0 sudo[80746]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:37 compute-0 chronyd[66040]: Selected source 162.159.200.1 (pool.ntp.org)
Jan 05 20:44:37 compute-0 sudo[80898]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jkfqcsemcxtpndxcopoecjljvswqgtyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645877.0073636-168-41096500738142/AnsiballZ_file.py'
Jan 05 20:44:37 compute-0 sudo[80898]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:37 compute-0 python3.9[80900]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:37 compute-0 sudo[80898]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:38 compute-0 sudo[81050]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlnowfynxnbhyeadlnxdcubidnbsbkdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645877.779386-168-36393953396899/AnsiballZ_file.py'
Jan 05 20:44:38 compute-0 sudo[81050]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:38 compute-0 python3.9[81052]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/ovn/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:38 compute-0 sudo[81050]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:39 compute-0 sudo[81202]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpsgigpltfqztcjkfrxbubejqpvggztm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645878.6521907-183-246384726574762/AnsiballZ_stat.py'
Jan 05 20:44:39 compute-0 sudo[81202]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:39 compute-0 python3.9[81204]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:39 compute-0 sudo[81202]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:39 compute-0 sudo[81325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uxyvfkwnnrrqbzdcehmueqpzazbqdqve ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645878.6521907-183-246384726574762/AnsiballZ_copy.py'
Jan 05 20:44:39 compute-0 sudo[81325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:39 compute-0 python3.9[81327]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645878.6521907-183-246384726574762/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=143615be7351b4891f1d24fefae1261d048d4c68 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:39 compute-0 sudo[81325]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:40 compute-0 sudo[81477]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eddouyyjsvmisxaxkhorxhkoxhsyiazf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645880.038936-183-83018643553631/AnsiballZ_stat.py'
Jan 05 20:44:40 compute-0 sudo[81477]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:40 compute-0 python3.9[81479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:40 compute-0 sudo[81477]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:41 compute-0 sudo[81600]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yioutrzyjcqetzylrssibhoxoxjuyfys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645880.038936-183-83018643553631/AnsiballZ_copy.py'
Jan 05 20:44:41 compute-0 sudo[81600]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:41 compute-0 python3.9[81602]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645880.038936-183-83018643553631/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7176830e8f961c446182b776fcd6e8a26a094236 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:41 compute-0 sudo[81600]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:42 compute-0 sudo[81752]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wzquscivlwanswlsoxuyvxqbpuqymslb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645881.650467-183-250041388439605/AnsiballZ_stat.py'
Jan 05 20:44:42 compute-0 sudo[81752]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:42 compute-0 python3.9[81754]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/ovn/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:42 compute-0 sudo[81752]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:42 compute-0 sudo[81875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opllicksxcsrhxunfsgssehsrrpcpwuo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645881.650467-183-250041388439605/AnsiballZ_copy.py'
Jan 05 20:44:42 compute-0 sudo[81875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:42 compute-0 python3.9[81877]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/ovn/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645881.650467-183-250041388439605/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=ce83c633fe4a9263c80b3361c5fd7b8c579bde4f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:42 compute-0 sudo[81875]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:43 compute-0 sudo[82027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiodybgzlhfnulzcylcoknmibkngrsto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645883.107027-227-60252951641663/AnsiballZ_file.py'
Jan 05 20:44:43 compute-0 sudo[82027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:43 compute-0 python3.9[82029]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:43 compute-0 sudo[82027]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:44 compute-0 sudo[82179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lalzfwegbxrylfjajavmntgivjyvolfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645883.8729112-227-75353178808591/AnsiballZ_file.py'
Jan 05 20:44:44 compute-0 sudo[82179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:44 compute-0 python3.9[82181]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/libvirt/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:44 compute-0 sudo[82179]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:45 compute-0 sudo[82331]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ormitfxdoebfpnzysfjfugyynkfgbnbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645884.7007535-242-159264116380980/AnsiballZ_stat.py'
Jan 05 20:44:45 compute-0 sudo[82331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:45 compute-0 python3.9[82333]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:45 compute-0 sudo[82331]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:45 compute-0 sudo[82454]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vaovxqfbmxjumnwjjxnmyjefuosrwgfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645884.7007535-242-159264116380980/AnsiballZ_copy.py'
Jan 05 20:44:45 compute-0 sudo[82454]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:46 compute-0 python3.9[82456]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645884.7007535-242-159264116380980/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=50df1cf915c95a6aa68176455ff68e9a6799ea9d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:46 compute-0 sudo[82454]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:46 compute-0 sudo[82606]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-admlnwgvqrdxeexdabqcrsqeapqpafaq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645886.2846873-242-18109863294974/AnsiballZ_stat.py'
Jan 05 20:44:46 compute-0 sudo[82606]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:46 compute-0 python3.9[82608]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:46 compute-0 sudo[82606]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:47 compute-0 sudo[82730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qomeerzdfbgqwhkllujpnqntwcgdnvnv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645886.2846873-242-18109863294974/AnsiballZ_copy.py'
Jan 05 20:44:47 compute-0 sudo[82730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:47 compute-0 python3.9[82732]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645886.2846873-242-18109863294974/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=028779f21f43d2c0ccca98626f2903be62d15107 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:47 compute-0 sudo[82730]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:48 compute-0 sshd-session[77310]: Connection closed by authenticating user root 43.226.60.137 port 39028 [preauth]
Jan 05 20:44:48 compute-0 sudo[82882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckhpahmgkhbdgmggjvtjojqlglexpnix ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645887.7358444-242-31173534422801/AnsiballZ_stat.py'
Jan 05 20:44:48 compute-0 sudo[82882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:48 compute-0 python3.9[82884]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/libvirt/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:48 compute-0 sudo[82882]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:48 compute-0 sudo[83005]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iamozqldlimshemxqufhfsrtcshmyila ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645887.7358444-242-31173534422801/AnsiballZ_copy.py'
Jan 05 20:44:48 compute-0 sudo[83005]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:49 compute-0 python3.9[83007]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/libvirt/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645887.7358444-242-31173534422801/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=7dfc94e76ac2f4eccb40e246dc98c7a4147d39e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:49 compute-0 sudo[83005]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:49 compute-0 sudo[83157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rajwanoqjozyfcacwydsugxbbxqknsdf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645889.2886882-286-251872988070288/AnsiballZ_file.py'
Jan 05 20:44:49 compute-0 sudo[83157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:49 compute-0 python3.9[83159]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:49 compute-0 sudo[83157]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:50 compute-0 sudo[83309]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kexxbuvzfrwivgzwticpiavpjzfljigj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645890.0179656-286-177814546948265/AnsiballZ_file.py'
Jan 05 20:44:50 compute-0 sudo[83309]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:50 compute-0 python3.9[83311]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/certs/neutron-metadata/default setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:50 compute-0 sudo[83309]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:51 compute-0 sudo[83461]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oohkxervlnxfkjzojwdahrqsoucjwmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645890.886872-301-262453806311656/AnsiballZ_stat.py'
Jan 05 20:44:51 compute-0 sudo[83461]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:51 compute-0 python3.9[83463]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:51 compute-0 sudo[83461]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:52 compute-0 sudo[83584]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nywhvhwcijwqezedmdpxbmnvdbtykxof ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645890.886872-301-262453806311656/AnsiballZ_copy.py'
Jan 05 20:44:52 compute-0 sudo[83584]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:52 compute-0 python3.9[83586]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645890.886872-301-262453806311656/.source.crt _original_basename=compute-0.ctlplane.example.com-tls.crt follow=False checksum=db07739ce6c8f36ad05a391470bbc5344bea456b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:52 compute-0 sudo[83584]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:53 compute-0 sudo[83737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kmrgfrgpylkmpiyxsouxcntojdvmeuwi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645892.8876133-301-260078810974453/AnsiballZ_stat.py'
Jan 05 20:44:53 compute-0 sudo[83737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:53 compute-0 python3.9[83739]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/ca.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:53 compute-0 sudo[83737]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:54 compute-0 sudo[83860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srhgfzyzntuiklweebchxowoemblimox ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645892.8876133-301-260078810974453/AnsiballZ_copy.py'
Jan 05 20:44:54 compute-0 sudo[83860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:54 compute-0 python3.9[83862]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/ca.crt group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645892.8876133-301-260078810974453/.source.crt _original_basename=compute-0.ctlplane.example.com-ca.crt follow=False checksum=7176830e8f961c446182b776fcd6e8a26a094236 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:54 compute-0 sudo[83860]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:54 compute-0 sudo[84012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zagvvxnnkptxpwlilxgsfxplevugljhi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645894.5437171-301-94264049533636/AnsiballZ_stat.py'
Jan 05 20:44:54 compute-0 sudo[84012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:55 compute-0 python3.9[84014]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/certs/neutron-metadata/default/tls.key follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:55 compute-0 sudo[84012]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:55 compute-0 sudo[84135]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-snmwzvpqrqignmjiahbbvnugyrsccsfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645894.5437171-301-94264049533636/AnsiballZ_copy.py'
Jan 05 20:44:55 compute-0 sudo[84135]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:55 compute-0 python3.9[84137]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/certs/neutron-metadata/default/tls.key group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645894.5437171-301-94264049533636/.source.key _original_basename=compute-0.ctlplane.example.com-tls.key follow=False checksum=c99e9b406ce4f9a97478d444f9493cb728f76c73 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:55 compute-0 sudo[84135]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:56 compute-0 sudo[84287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-afeerwsppctcuqkjinwtqienjiforips ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645896.4680202-361-119179890467498/AnsiballZ_file.py'
Jan 05 20:44:56 compute-0 sudo[84287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:57 compute-0 python3.9[84289]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:57 compute-0 sudo[84287]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:57 compute-0 sudo[84439]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bvzdgghxrbfjabbwmymuiytewglwytay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645897.225078-369-275582007997852/AnsiballZ_stat.py'
Jan 05 20:44:57 compute-0 sudo[84439]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:57 compute-0 python3.9[84441]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:44:57 compute-0 sudo[84439]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:58 compute-0 sudo[84562]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shuuzyvfcycsabnzmestwikkjucceswq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645897.225078-369-275582007997852/AnsiballZ_copy.py'
Jan 05 20:44:58 compute-0 sudo[84562]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:58 compute-0 python3.9[84564]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645897.225078-369-275582007997852/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:44:58 compute-0 sudo[84562]: pam_unix(sudo:session): session closed for user root
Jan 05 20:44:59 compute-0 sudo[84714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etrgenjyreecuvnejptzzntsrmzeoapb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645898.7927759-385-264257316786668/AnsiballZ_file.py'
Jan 05 20:44:59 compute-0 sudo[84714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:44:59 compute-0 python3.9[84716]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/repo-setup setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:44:59 compute-0 sudo[84714]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:00 compute-0 sudo[84866]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hnelealfqiqxsmvbehtkenmqexinvdde ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645899.6774528-393-32683393358789/AnsiballZ_stat.py'
Jan 05 20:45:00 compute-0 sudo[84866]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:00 compute-0 python3.9[84868]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:00 compute-0 sudo[84866]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:00 compute-0 sudo[84989]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmjdfvzvcqvpxtsnkuafpaobkueqrsvn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645899.6774528-393-32683393358789/AnsiballZ_copy.py'
Jan 05 20:45:00 compute-0 sudo[84989]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:00 compute-0 python3.9[84991]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/repo-setup/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645899.6774528-393-32683393358789/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:00 compute-0 sudo[84989]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:01 compute-0 sudo[85141]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nmqwlyrphlteoxkarcyxmdccarkolivx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645901.0599425-409-143034439562988/AnsiballZ_file.py'
Jan 05 20:45:01 compute-0 sudo[85141]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:01 compute-0 python3.9[85143]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:01 compute-0 sudo[85141]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:02 compute-0 sudo[85293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upivgqystloehlypsijwtifelluhfplg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645902.0905306-417-48983053553664/AnsiballZ_stat.py'
Jan 05 20:45:02 compute-0 sudo[85293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:02 compute-0 python3.9[85295]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:02 compute-0 sudo[85293]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:03 compute-0 sudo[85416]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfixtccoosjkaxfpjmlrqekpmdwxmqao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645902.0905306-417-48983053553664/AnsiballZ_copy.py'
Jan 05 20:45:03 compute-0 sudo[85416]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:03 compute-0 python3.9[85418]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/libvirt/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645902.0905306-417-48983053553664/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:03 compute-0 sudo[85416]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:04 compute-0 sudo[85568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwbhafmhqacbzqswwkxiwtrtsmkqbukd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645903.7681167-433-107874716353580/AnsiballZ_file.py'
Jan 05 20:45:04 compute-0 sudo[85568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:04 compute-0 python3.9[85570]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:04 compute-0 sudo[85568]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:04 compute-0 sudo[85720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-soxumersrdrltcoyoctmtghgmuioebee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645904.5460372-441-7685518806486/AnsiballZ_stat.py'
Jan 05 20:45:04 compute-0 sudo[85720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:05 compute-0 python3.9[85722]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:05 compute-0 sudo[85720]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:05 compute-0 sudo[85843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbnkkyhjwphofzlewszmgxcsygthrjdn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645904.5460372-441-7685518806486/AnsiballZ_copy.py'
Jan 05 20:45:05 compute-0 sudo[85843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:05 compute-0 python3.9[85845]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645904.5460372-441-7685518806486/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:05 compute-0 sudo[85843]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:06 compute-0 sudo[85995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tssdmjvykjqlpkwdnrwwrjbfmjkqbdvu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645905.9920766-457-130398420768730/AnsiballZ_file.py'
Jan 05 20:45:06 compute-0 sudo[85995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:06 compute-0 python3.9[85997]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:06 compute-0 sudo[85995]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:07 compute-0 sudo[86147]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ywryiexnsgvzabsjltoquluuhllitymc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645906.9186661-465-200640728446581/AnsiballZ_stat.py'
Jan 05 20:45:07 compute-0 sudo[86147]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:07 compute-0 python3.9[86149]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:07 compute-0 sudo[86147]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:08 compute-0 sudo[86270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rejyofxoaipbctcqnwdvbampaibnozal ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645906.9186661-465-200640728446581/AnsiballZ_copy.py'
Jan 05 20:45:08 compute-0 sudo[86270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:08 compute-0 python3.9[86272]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645906.9186661-465-200640728446581/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:08 compute-0 sudo[86270]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:08 compute-0 sudo[86422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gffoohrviczypyszzskhwexyuyxlcrtw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645908.5061598-481-14763846849305/AnsiballZ_file.py'
Jan 05 20:45:08 compute-0 sudo[86422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:09 compute-0 python3.9[86424]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/neutron-metadata setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:09 compute-0 sudo[86422]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:09 compute-0 sudo[86574]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvnxopezzhfzrllsfvsfwuudquefbomd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645909.2310717-489-197797406348864/AnsiballZ_stat.py'
Jan 05 20:45:09 compute-0 sudo[86574]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:09 compute-0 python3.9[86576]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:09 compute-0 sudo[86574]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:10 compute-0 sudo[86697]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ldfbtvnwxliybbwxskwmzougqymjelxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645909.2310717-489-197797406348864/AnsiballZ_copy.py'
Jan 05 20:45:10 compute-0 sudo[86697]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:10 compute-0 python3.9[86699]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645909.2310717-489-197797406348864/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:10 compute-0 sudo[86697]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:11 compute-0 sudo[86849]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aecjpgrrwhfancuwbqpvvxgyvrzocesp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645910.787095-505-153306715469339/AnsiballZ_file.py'
Jan 05 20:45:11 compute-0 sudo[86849]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:11 compute-0 python3.9[86851]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/bootstrap setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:11 compute-0 sudo[86849]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:12 compute-0 sudo[87001]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynzdglsixchzxwipqgbqtnkwhtojdsga ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645911.7659059-513-237851229480064/AnsiballZ_stat.py'
Jan 05 20:45:12 compute-0 sudo[87001]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:12 compute-0 python3.9[87003]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:12 compute-0 sudo[87001]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:12 compute-0 sudo[87124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dtznguoxkbbvlglpplxsqsbvjmjaoluc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645911.7659059-513-237851229480064/AnsiballZ_copy.py'
Jan 05 20:45:12 compute-0 sudo[87124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:13 compute-0 python3.9[87126]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/bootstrap/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645911.7659059-513-237851229480064/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:13 compute-0 sudo[87124]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:13 compute-0 sudo[87276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lukamnynpxqfktkfddvaryqxlyrrqukg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645913.3615153-529-108033508556831/AnsiballZ_file.py'
Jan 05 20:45:13 compute-0 sudo[87276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:13 compute-0 python3.9[87278]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/var/lib/openstack/cacerts/telemetry-power-monitoring setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:13 compute-0 sudo[87276]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:14 compute-0 sudo[87428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ijdujpvbgoxrsierwjdiwatzrrukyqya ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645914.1606073-537-257388598443850/AnsiballZ_stat.py'
Jan 05 20:45:14 compute-0 sudo[87428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:14 compute-0 python3.9[87430]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:14 compute-0 sudo[87428]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:15 compute-0 sudo[87551]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuiskznthgfuuudrgfcycpklthmxjtcf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645914.1606073-537-257388598443850/AnsiballZ_copy.py'
Jan 05 20:45:15 compute-0 sudo[87551]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:15 compute-0 python3.9[87553]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645914.1606073-537-257388598443850/.source.pem _original_basename=tls-ca-bundle.pem follow=False checksum=24212b8f56b88835433cd55368c431a44259c040 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:15 compute-0 sudo[87551]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:15 compute-0 sshd-session[78336]: Connection closed by 192.168.122.30 port 53644
Jan 05 20:45:15 compute-0 sshd-session[78333]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:45:15 compute-0 systemd[1]: session-18.scope: Deactivated successfully.
Jan 05 20:45:15 compute-0 systemd[1]: session-18.scope: Consumed 42.364s CPU time.
Jan 05 20:45:15 compute-0 systemd-logind[788]: Session 18 logged out. Waiting for processes to exit.
Jan 05 20:45:15 compute-0 systemd-logind[788]: Removed session 18.
Jan 05 20:45:19 compute-0 sshd-session[87578]: Connection closed by authenticating user root 43.226.60.137 port 55606 [preauth]
Jan 05 20:45:21 compute-0 sshd-session[87580]: Accepted publickey for zuul from 192.168.122.30 port 52912 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:45:21 compute-0 systemd-logind[788]: New session 19 of user zuul.
Jan 05 20:45:21 compute-0 systemd[1]: Started Session 19 of User zuul.
Jan 05 20:45:21 compute-0 sshd-session[87580]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:45:23 compute-0 python3.9[87733]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:45:24 compute-0 sudo[87887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjcrfhsgfoupcrcgziwnxttrlipmyeyl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645923.6268249-34-234554695807621/AnsiballZ_file.py'
Jan 05 20:45:24 compute-0 sudo[87887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:24 compute-0 python3.9[87889]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:24 compute-0 sudo[87887]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:24 compute-0 sudo[88039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kpphgsgsysrzbeeietbfjtibycpgkorl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645924.6178744-34-22091670655443/AnsiballZ_file.py'
Jan 05 20:45:24 compute-0 sudo[88039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:25 compute-0 python3.9[88041]: ansible-ansible.builtin.file Invoked with group=openvswitch owner=openvswitch path=/var/lib/openvswitch/ovn setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:45:25 compute-0 sudo[88039]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:26 compute-0 python3.9[88191]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:45:26 compute-0 sudo[88341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wpcslyfgvhtkohahxntwqyunaaxlpyjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645926.3664653-57-48948235779262/AnsiballZ_seboolean.py'
Jan 05 20:45:26 compute-0 sudo[88341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:27 compute-0 python3.9[88343]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 05 20:45:28 compute-0 sudo[88341]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:29 compute-0 sudo[88497]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjgbcubbzbcyzaztkzmxkxricmkabesd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645928.6055675-67-52131753867538/AnsiballZ_setup.py'
Jan 05 20:45:29 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=9 res=1
Jan 05 20:45:29 compute-0 sudo[88497]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:29 compute-0 python3.9[88499]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:45:29 compute-0 sudo[88497]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:30 compute-0 sudo[88581]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iikekagupixtheyacpjqpetszskfghqr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645928.6055675-67-52131753867538/AnsiballZ_dnf.py'
Jan 05 20:45:30 compute-0 sudo[88581]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:30 compute-0 python3.9[88583]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:45:31 compute-0 sudo[88581]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:32 compute-0 sudo[88734]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qatmdstgrlqeegjpzlpntturjjozfxfg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645931.8684595-79-119252032765257/AnsiballZ_systemd.py'
Jan 05 20:45:32 compute-0 sudo[88734]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:32 compute-0 python3.9[88736]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:45:32 compute-0 sudo[88734]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:33 compute-0 sudo[88889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-knqjywcfofloixgrijuhkmrykubiojlo ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767645933.2190413-87-193035309634160/AnsiballZ_edpm_nftables_snippet.py'
Jan 05 20:45:33 compute-0 sudo[88889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:33 compute-0 python3[88891]: ansible-osp.edpm.edpm_nftables_snippet Invoked with content=- rule_name: 118 neutron vxlan networks
                                            rule:
                                              proto: udp
                                              dport: 4789
                                          - rule_name: 119 neutron geneve networks
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              state: ["UNTRACKED"]
                                          - rule_name: 120 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: OUTPUT
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                          - rule_name: 121 neutron geneve networks no conntrack
                                            rule:
                                              proto: udp
                                              dport: 6081
                                              table: raw
                                              chain: PREROUTING
                                              jump: NOTRACK
                                              action: append
                                              state: []
                                           dest=/var/lib/edpm-config/firewall/ovn.yaml state=present
Jan 05 20:45:33 compute-0 sudo[88889]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:34 compute-0 sudo[89041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-romfdcwupuangrcljfonpdlegftwovtj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645934.2579963-96-134170431892139/AnsiballZ_file.py'
Jan 05 20:45:34 compute-0 sudo[89041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:34 compute-0 python3.9[89043]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:34 compute-0 sudo[89041]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:35 compute-0 sudo[89193]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbfwyabbwolpmjenjcquxflcqksbmvud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645935.0890517-104-231555984743727/AnsiballZ_stat.py'
Jan 05 20:45:35 compute-0 sudo[89193]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:35 compute-0 python3.9[89195]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:35 compute-0 sudo[89193]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:36 compute-0 sudo[89271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfjtgkqjaalixmhjsmyyzeimeafvueyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645935.0890517-104-231555984743727/AnsiballZ_file.py'
Jan 05 20:45:36 compute-0 sudo[89271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:36 compute-0 python3.9[89273]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:36 compute-0 sudo[89271]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:37 compute-0 sudo[89423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eqbdsxgvyqsfjrasbdinfxeuuuuejddd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645936.6271975-116-202122640485483/AnsiballZ_stat.py'
Jan 05 20:45:37 compute-0 sudo[89423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:37 compute-0 python3.9[89425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:37 compute-0 sudo[89423]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:37 compute-0 sudo[89501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fvbxzuqfomhtiiwwhiiebakwennalawk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645936.6271975-116-202122640485483/AnsiballZ_file.py'
Jan 05 20:45:37 compute-0 sudo[89501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:37 compute-0 python3.9[89503]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.9ehg_8fm recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:37 compute-0 sudo[89501]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:38 compute-0 sudo[89653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oypjujehffdvnxfykxsbamrcrrhfuslv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645938.1280165-128-81485821658461/AnsiballZ_stat.py'
Jan 05 20:45:38 compute-0 sudo[89653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:38 compute-0 python3.9[89655]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:38 compute-0 sudo[89653]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:39 compute-0 sudo[89731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iicuneggufhhrzpisdgadftbkgvhjxex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645938.1280165-128-81485821658461/AnsiballZ_file.py'
Jan 05 20:45:39 compute-0 sudo[89731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:39 compute-0 python3.9[89733]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:39 compute-0 sudo[89731]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:40 compute-0 sudo[89883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kqvvoidemhshxymlqwnbkkhdrjnnzvfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645939.5531163-141-47878065801007/AnsiballZ_command.py'
Jan 05 20:45:40 compute-0 sudo[89883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:40 compute-0 python3.9[89885]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:40 compute-0 sudo[89883]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:41 compute-0 sudo[90036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cojrnwembfzprijwmzunrvjvrpnpvshw ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767645940.5523813-149-49472096867009/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 20:45:41 compute-0 sudo[90036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:41 compute-0 python3[90038]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 20:45:41 compute-0 sudo[90036]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:41 compute-0 sudo[90188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-usxwohwbzxqortwsxdhjalgpipaifjiv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645941.483909-157-91545763112200/AnsiballZ_stat.py'
Jan 05 20:45:41 compute-0 sudo[90188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:42 compute-0 python3.9[90190]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:42 compute-0 sudo[90188]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:42 compute-0 sudo[90313]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfejdbiefomcgpxoymcxqjcgxfxdjnzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645941.483909-157-91545763112200/AnsiballZ_copy.py'
Jan 05 20:45:42 compute-0 sudo[90313]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:42 compute-0 python3.9[90315]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645941.483909-157-91545763112200/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:42 compute-0 sudo[90313]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:43 compute-0 sudo[90465]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wbgeghevrvcnrvizjqcjbrkghjyrhofp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645943.0796382-172-278752890908634/AnsiballZ_stat.py'
Jan 05 20:45:43 compute-0 sudo[90465]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:43 compute-0 python3.9[90467]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:43 compute-0 sudo[90465]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:44 compute-0 sudo[90590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ctywtwbjddtltylobzbqnjbbvbkcayuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645943.0796382-172-278752890908634/AnsiballZ_copy.py'
Jan 05 20:45:44 compute-0 sudo[90590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:44 compute-0 python3.9[90592]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-update-jumps.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645943.0796382-172-278752890908634/.source.nft follow=False _original_basename=jump-chain.j2 checksum=81c2fc96c23335ffe374f9b064e885d5d971ddf9 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:44 compute-0 sudo[90590]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:44 compute-0 sudo[90742]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jlppwdhzwfkjtiuyhsxyiisedoiltiog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645944.5446534-187-114976944616940/AnsiballZ_stat.py'
Jan 05 20:45:44 compute-0 sudo[90742]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:45 compute-0 python3.9[90744]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:45 compute-0 sudo[90742]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:45 compute-0 sudo[90867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mnpmqcophvbsniilgzoezmroixukzigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645944.5446534-187-114976944616940/AnsiballZ_copy.py'
Jan 05 20:45:45 compute-0 sudo[90867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:45 compute-0 python3.9[90869]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-flushes.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645944.5446534-187-114976944616940/.source.nft follow=False _original_basename=flush-chain.j2 checksum=4d3ffec49c8eb1a9b80d2f1e8cd64070063a87b4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:45 compute-0 sudo[90867]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:46 compute-0 sudo[91019]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmuooojvjnkmcaulizuzifpyiascogfc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645946.0853431-202-179294877969097/AnsiballZ_stat.py'
Jan 05 20:45:46 compute-0 sudo[91019]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:46 compute-0 python3.9[91021]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:46 compute-0 sudo[91019]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:47 compute-0 sudo[91144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvktccaysfwfnrretcfqyvfpfkypzrgx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645946.0853431-202-179294877969097/AnsiballZ_copy.py'
Jan 05 20:45:47 compute-0 sudo[91144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:47 compute-0 python3.9[91146]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-chains.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645946.0853431-202-179294877969097/.source.nft follow=False _original_basename=chains.j2 checksum=298ada419730ec15df17ded0cc50c97a4014a591 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:47 compute-0 sudo[91144]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:48 compute-0 sudo[91296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yrjhxhmkjwcfetezfdscabbpycpqvint ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645947.705496-217-99891538849940/AnsiballZ_stat.py'
Jan 05 20:45:48 compute-0 sudo[91296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:48 compute-0 python3.9[91298]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:45:48 compute-0 sudo[91296]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:48 compute-0 sudo[91421]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fiabgslkfnwihkoqykzrvhjamautiaxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645947.705496-217-99891538849940/AnsiballZ_copy.py'
Jan 05 20:45:48 compute-0 sudo[91421]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:49 compute-0 python3.9[91423]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767645947.705496-217-99891538849940/.source.nft follow=False _original_basename=ruleset.j2 checksum=eb691bdb7d792c5f8ff0d719e807fe1c95b09438 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:49 compute-0 sudo[91421]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:49 compute-0 sudo[91573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eioczosrqcgghzaddjmglzfnusqqdcbd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645949.2837079-232-266291585613850/AnsiballZ_file.py'
Jan 05 20:45:49 compute-0 sudo[91573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:49 compute-0 python3.9[91575]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:49 compute-0 sudo[91573]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:50 compute-0 sudo[91727]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdhhlmgrderstckagrsbzmibcdzqvaxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645950.108879-240-182436349057565/AnsiballZ_command.py'
Jan 05 20:45:50 compute-0 sudo[91727]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:50 compute-0 python3.9[91729]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:50 compute-0 sudo[91727]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:51 compute-0 sudo[91882]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olvaxakbucknwmletawlzyqlreqdrvhq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645951.0069146-248-176680101023405/AnsiballZ_blockinfile.py'
Jan 05 20:45:51 compute-0 sudo[91882]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:52 compute-0 python3.9[91884]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                            include "/etc/nftables/edpm-chains.nft"
                                            include "/etc/nftables/edpm-rules.nft"
                                            include "/etc/nftables/edpm-jumps.nft"
                                             path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:52 compute-0 sudo[91882]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:52 compute-0 sshd-session[91652]: Connection closed by authenticating user root 43.226.60.137 port 51506 [preauth]
Jan 05 20:45:52 compute-0 sudo[92034]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcawbpplehsmeegfitcgtyrzecxzjawy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645952.3187327-257-216609086317726/AnsiballZ_command.py'
Jan 05 20:45:52 compute-0 sudo[92034]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:52 compute-0 python3.9[92036]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:52 compute-0 sudo[92034]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:53 compute-0 sudo[92187]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-frfyiemcbbdbrpnmyfvgulnrthcffkka ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645953.1726906-265-185391143690668/AnsiballZ_stat.py'
Jan 05 20:45:53 compute-0 sudo[92187]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:53 compute-0 python3.9[92189]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:45:53 compute-0 sudo[92187]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:54 compute-0 sudo[92341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcqxlhsfnhlwzmrobhyhwxutemzcimhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645954.038249-273-18960512064786/AnsiballZ_command.py'
Jan 05 20:45:54 compute-0 sudo[92341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:54 compute-0 python3.9[92343]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:54 compute-0 sudo[92341]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:55 compute-0 sudo[92496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vstleaguqrsoguvxzvjourtjynohlpyz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645954.9971194-281-229991806453111/AnsiballZ_file.py'
Jan 05 20:45:55 compute-0 sudo[92496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:55 compute-0 python3.9[92498]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:45:55 compute-0 sudo[92496]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:56 compute-0 python3.9[92648]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'machine'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:45:58 compute-0 sudo[92799]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-arohdxnqqibypcvcxvveqjdudwtznafm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645957.6145217-321-134277302081258/AnsiballZ_command.py'
Jan 05 20:45:58 compute-0 sudo[92799]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:58 compute-0 python3.9[92801]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings="datacentre:3e:0a:86:5c:f9:a2" external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch 
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:58 compute-0 ovs-vsctl[92802]: ovs|00001|vsctl|INFO|Called as ovs-vsctl set open . external_ids:hostname=compute-0.ctlplane.example.com external_ids:ovn-bridge=br-int external_ids:ovn-bridge-mappings=datacentre:br-ex external_ids:ovn-chassis-mac-mappings=datacentre:3e:0a:86:5c:f9:a2 external_ids:ovn-encap-ip=172.19.0.100 external_ids:ovn-encap-type=geneve external_ids:ovn-encap-tos=0 external_ids:ovn-match-northd-version=False external_ids:ovn-monitor-all=True external_ids:ovn-remote=ssl:ovsdbserver-sb.openstack.svc:6642 external_ids:ovn-remote-probe-interval=60000 external_ids:ovn-ofctrl-wait-before-clear=8000 external_ids:rundir=/var/run/openvswitch
Jan 05 20:45:58 compute-0 sudo[92799]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:58 compute-0 sudo[92952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjryzkjsmyphdnwzacucrdawcnzlznwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645958.5481207-330-1759317380463/AnsiballZ_command.py'
Jan 05 20:45:58 compute-0 sudo[92952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:59 compute-0 python3.9[92954]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                            ovs-vsctl show | grep -q "Manager"
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:59 compute-0 sudo[92952]: pam_unix(sudo:session): session closed for user root
Jan 05 20:45:59 compute-0 sudo[93107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bopmaedfhlukqclpcftfveenvphciivy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645959.4087372-338-200300801921061/AnsiballZ_command.py'
Jan 05 20:45:59 compute-0 sudo[93107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:45:59 compute-0 python3.9[93109]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl --timeout=5 --id=@manager -- create Manager target=\"ptcp:********@manager
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:45:59 compute-0 ovs-vsctl[93110]: ovs|00001|vsctl|INFO|Called as ovs-vsctl --timeout=5 --id=@manager -- create Manager "target=\"ptcp:6640:127.0.0.1\"" -- add Open_vSwitch . manager_options @manager
Jan 05 20:45:59 compute-0 sudo[93107]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:00 compute-0 python3.9[93260]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:46:01 compute-0 sudo[93412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htzuvtorqmqmsarttqijytamriavypxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645961.0240812-355-45674047759955/AnsiballZ_file.py'
Jan 05 20:46:01 compute-0 sudo[93412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:01 compute-0 python3.9[93414]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:01 compute-0 sudo[93412]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:02 compute-0 sudo[93564]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzpzufvkqhkfaplwabuwhnymvqsnqred ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645962.1133704-363-59241485668820/AnsiballZ_stat.py'
Jan 05 20:46:02 compute-0 sudo[93564]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:02 compute-0 python3.9[93566]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:02 compute-0 sudo[93564]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:03 compute-0 sudo[93642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrdfshnxszrltjjponolbshwcqxkroik ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645962.1133704-363-59241485668820/AnsiballZ_file.py'
Jan 05 20:46:03 compute-0 sudo[93642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:03 compute-0 python3.9[93644]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:03 compute-0 sudo[93642]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:03 compute-0 sudo[93794]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rvtslnywhtdwiiqnpzjoypryljqfivan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645963.5056999-363-20594061075192/AnsiballZ_stat.py'
Jan 05 20:46:03 compute-0 sudo[93794]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:04 compute-0 python3.9[93796]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:04 compute-0 sudo[93794]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:04 compute-0 sudo[93872]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avwzxshflrogxzplxbevvncltruyjnbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645963.5056999-363-20594061075192/AnsiballZ_file.py'
Jan 05 20:46:04 compute-0 sudo[93872]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:04 compute-0 python3.9[93874]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:04 compute-0 sudo[93872]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:05 compute-0 sudo[94024]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kthqyjpneclibmmgtiltjqndcxlkvclt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645964.8690443-386-174347162715576/AnsiballZ_file.py'
Jan 05 20:46:05 compute-0 sudo[94024]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:05 compute-0 python3.9[94026]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:05 compute-0 sudo[94024]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:06 compute-0 sudo[94176]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzlsxgekieqmldkljotnghttkdcwqhty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645965.6259136-394-239105316543343/AnsiballZ_stat.py'
Jan 05 20:46:06 compute-0 sudo[94176]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:06 compute-0 python3.9[94178]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:06 compute-0 sudo[94176]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:06 compute-0 sudo[94254]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfsxwslxfraujebdqszqwvxuhhlizsir ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645965.6259136-394-239105316543343/AnsiballZ_file.py'
Jan 05 20:46:06 compute-0 sudo[94254]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:06 compute-0 python3.9[94256]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:06 compute-0 sudo[94254]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:07 compute-0 sudo[94406]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxpaxfzxtlvvqdfkvzjkmikrcikdrvbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645967.0176368-406-248084318056342/AnsiballZ_stat.py'
Jan 05 20:46:07 compute-0 sudo[94406]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:07 compute-0 python3.9[94408]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:07 compute-0 sudo[94406]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:07 compute-0 sudo[94484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bgeoosqrfwhogmhqbhydndllkpeuhgif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645967.0176368-406-248084318056342/AnsiballZ_file.py'
Jan 05 20:46:07 compute-0 sudo[94484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:08 compute-0 python3.9[94486]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:08 compute-0 sudo[94484]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:08 compute-0 sudo[94636]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-molhushlfxmcjkqspwuokkozavvwsyux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645968.3398173-418-43369814327065/AnsiballZ_systemd.py'
Jan 05 20:46:08 compute-0 sudo[94636]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:09 compute-0 python3.9[94638]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:46:09 compute-0 systemd[1]: Reloading.
Jan 05 20:46:09 compute-0 systemd-rc-local-generator[94666]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:46:09 compute-0 systemd-sysv-generator[94670]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:46:09 compute-0 sudo[94636]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:10 compute-0 sudo[94825]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lknuevqhqfntzwljmsysdfdqmrytqdgw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645969.8565419-426-43606391410185/AnsiballZ_stat.py'
Jan 05 20:46:10 compute-0 sudo[94825]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:10 compute-0 python3.9[94827]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:10 compute-0 sudo[94825]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:10 compute-0 sudo[94903]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vawuvshxunpchpoxtbservzmqiylpsnx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645969.8565419-426-43606391410185/AnsiballZ_file.py'
Jan 05 20:46:10 compute-0 sudo[94903]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:10 compute-0 python3.9[94905]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:10 compute-0 sudo[94903]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:11 compute-0 sudo[95055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zqtvvgwzhggvufosjikfrpfkwljjutgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645971.1452656-438-127361660808300/AnsiballZ_stat.py'
Jan 05 20:46:11 compute-0 sudo[95055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:11 compute-0 python3.9[95057]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:11 compute-0 sudo[95055]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:12 compute-0 sudo[95133]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtfymwdskroyeadlytgzwxjoevkgkoea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645971.1452656-438-127361660808300/AnsiballZ_file.py'
Jan 05 20:46:12 compute-0 sudo[95133]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:12 compute-0 python3.9[95135]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:12 compute-0 sudo[95133]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:12 compute-0 sudo[95285]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scicnjgipigzjaznfynzqvzfmsroltyg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645972.5047922-450-176601024349085/AnsiballZ_systemd.py'
Jan 05 20:46:12 compute-0 sudo[95285]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:13 compute-0 python3.9[95287]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:46:13 compute-0 systemd[1]: Reloading.
Jan 05 20:46:13 compute-0 systemd-rc-local-generator[95311]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:46:13 compute-0 systemd-sysv-generator[95314]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:46:13 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 20:46:13 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 20:46:13 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 20:46:13 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 20:46:13 compute-0 sudo[95285]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:14 compute-0 sudo[95479]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-slerkyczputfpyipvwmwcwfigtlkzmce ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645973.9770572-460-272306055232791/AnsiballZ_file.py'
Jan 05 20:46:14 compute-0 sudo[95479]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:14 compute-0 python3.9[95481]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:14 compute-0 sudo[95479]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:15 compute-0 sudo[95631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vnsqazxksolzwwyjbeogbqafattyhcei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645974.7558346-468-176504507570870/AnsiballZ_stat.py'
Jan 05 20:46:15 compute-0 sudo[95631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:15 compute-0 python3.9[95633]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_controller/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:15 compute-0 sudo[95631]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:15 compute-0 sudo[95754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-doqzwfnsdddizzhkysolovrwtahttnmj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645974.7558346-468-176504507570870/AnsiballZ_copy.py'
Jan 05 20:46:15 compute-0 sudo[95754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:15 compute-0 python3.9[95756]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_controller/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767645974.7558346-468-176504507570870/.source _original_basename=healthcheck follow=False checksum=4098dd010265fabdf5c26b97d169fc4e575ff457 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:15 compute-0 sudo[95754]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:16 compute-0 sudo[95906]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcqpdccmnjaojvpxzyuxvnuldtcevjyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645976.3595867-485-256838806405684/AnsiballZ_file.py'
Jan 05 20:46:16 compute-0 sudo[95906]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:16 compute-0 python3.9[95908]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:16 compute-0 sudo[95906]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:17 compute-0 sudo[96058]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzydcbvsbhzebmuwzbdzecyndlpjcoje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645977.1673474-493-137350392207705/AnsiballZ_file.py'
Jan 05 20:46:17 compute-0 sudo[96058]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:17 compute-0 python3.9[96060]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:17 compute-0 sudo[96058]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:18 compute-0 sudo[96210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-irfippzqkgwltubrahtlpxeedrwoyujg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645978.0678508-501-217533996372896/AnsiballZ_stat.py'
Jan 05 20:46:18 compute-0 sudo[96210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:18 compute-0 python3.9[96212]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_controller.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:18 compute-0 sudo[96210]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:19 compute-0 sudo[96333]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwdsikbbnrwmiczfsegfwwakbtetzvwb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645978.0678508-501-217533996372896/AnsiballZ_copy.py'
Jan 05 20:46:19 compute-0 sudo[96333]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:19 compute-0 python3.9[96335]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_controller.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645978.0678508-501-217533996372896/.source.json _original_basename=.egfhke9d follow=False checksum=2328fc98619beeb08ee32b01f15bb43094c10b61 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:19 compute-0 sudo[96333]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:20 compute-0 python3.9[96485]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_controller state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:22 compute-0 sudo[96907]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avyavvbogidenrrmkemggplbxqhhqbbn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645982.122384-541-164698970486730/AnsiballZ_container_config_data.py'
Jan 05 20:46:22 compute-0 sudo[96907]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:22 compute-0 python3.9[96909]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_controller config_pattern=*.json debug=False
Jan 05 20:46:22 compute-0 sudo[96907]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:23 compute-0 sudo[97060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ayluofluvvyzguagbkrueuulknxjwefi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645983.262757-552-216924903153827/AnsiballZ_container_config_hash.py'
Jan 05 20:46:23 compute-0 sudo[97060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:23 compute-0 sshd-session[96889]: Connection closed by authenticating user root 43.226.60.137 port 51052 [preauth]
Jan 05 20:46:24 compute-0 python3.9[97062]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:46:24 compute-0 sudo[97060]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:24 compute-0 sudo[97212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scnxjbvblpldycgtejqbdyaelksrnolu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645984.4094617-561-176231196489994/AnsiballZ_podman_container_info.py'
Jan 05 20:46:24 compute-0 sudo[97212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:25 compute-0 python3.9[97214]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:46:25 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:46:25 compute-0 sudo[97212]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:26 compute-0 sudo[97374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsofbbtzbkjteiokpxbulvopkbqpqgbg ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767645985.7861083-574-108384143260753/AnsiballZ_edpm_container_manage.py'
Jan 05 20:46:26 compute-0 sudo[97374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:26 compute-0 python3[97376]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_controller config_id=ovn_controller config_overrides={} config_patterns=*.json containers=['ovn_controller'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:46:26 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:46:26 compute-0 podman[97412]: 2026-01-05 20:46:26.881402536 +0000 UTC m=+0.074645893 container create 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 20:46:26 compute-0 podman[97412]: 2026-01-05 20:46:26.844928535 +0000 UTC m=+0.038171942 image pull a17927617ef5a603f0594ee0d6df65aabdc9e0303ccc5a52c36f193de33ee0fe quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 20:46:26 compute-0 python3[97376]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_controller --conmon-pidfile /run/ovn_controller.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd --healthcheck-command /openstack/healthcheck --label config_id=ovn_controller --label container_name=ovn_controller --label managed_by=edpm_ansible --label config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --user root --volume /lib/modules:/lib/modules:ro --volume /run:/run --volume /var/lib/openvswitch/ovn:/run/ovn:shared,z --volume /var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified
Jan 05 20:46:27 compute-0 sudo[97374]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:27 compute-0 sudo[97602]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qduhmnhhpsuqwrdyludmnaynjrykbwcv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645987.3426073-582-79985742549116/AnsiballZ_stat.py'
Jan 05 20:46:27 compute-0 sudo[97602]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:27 compute-0 systemd[1]: var-lib-containers-storage-overlay.mount: Deactivated successfully.
Jan 05 20:46:27 compute-0 python3.9[97604]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:46:27 compute-0 sudo[97602]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:28 compute-0 sudo[97756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wjszjyjwaasmgjiypeflsqiiivtenxaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645988.2075863-591-164545260226410/AnsiballZ_file.py'
Jan 05 20:46:28 compute-0 sudo[97756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:28 compute-0 python3.9[97758]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_controller.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:28 compute-0 sudo[97756]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:29 compute-0 sudo[97832]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xyvghgmpnntxcikfdjatvmoaycyisomy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645988.2075863-591-164545260226410/AnsiballZ_stat.py'
Jan 05 20:46:29 compute-0 sudo[97832]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:29 compute-0 python3.9[97834]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_controller_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:46:29 compute-0 sudo[97832]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:29 compute-0 sudo[97983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nlhbgchujlokvwuotyuqphicafojdmip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645989.4407141-591-159323612091/AnsiballZ_copy.py'
Jan 05 20:46:29 compute-0 sudo[97983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:30 compute-0 python3.9[97985]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767645989.4407141-591-159323612091/source dest=/etc/systemd/system/edpm_ovn_controller.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:30 compute-0 sudo[97983]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:30 compute-0 sudo[98059]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yddjzhwvkgcgmdzbpknlkjrnpucjbwcs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645989.4407141-591-159323612091/AnsiballZ_systemd.py'
Jan 05 20:46:30 compute-0 sudo[98059]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:30 compute-0 python3.9[98061]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:46:30 compute-0 systemd[1]: Reloading.
Jan 05 20:46:30 compute-0 systemd-sysv-generator[98093]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:46:30 compute-0 systemd-rc-local-generator[98090]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:46:31 compute-0 sudo[98059]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:31 compute-0 sudo[98171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ejyqnabqbcnugnsjllcdwodciywfwovx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645989.4407141-591-159323612091/AnsiballZ_systemd.py'
Jan 05 20:46:31 compute-0 sudo[98171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:31 compute-0 python3.9[98173]: ansible-systemd Invoked with state=restarted name=edpm_ovn_controller.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:46:31 compute-0 systemd[1]: Reloading.
Jan 05 20:46:31 compute-0 systemd-sysv-generator[98208]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:46:31 compute-0 systemd-rc-local-generator[98204]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:46:32 compute-0 systemd[1]: Starting ovn_controller container...
Jan 05 20:46:32 compute-0 systemd[1]: Created slice Virtual Machine and Container Slice.
Jan 05 20:46:32 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:46:32 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a6537bc3b1d1dbfcdc5e6677ce803c9ecfc15cfa358d098be5d83fd819e96d/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 05 20:46:32 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.
Jan 05 20:46:32 compute-0 podman[98214]: 2026-01-05 20:46:32.309404894 +0000 UTC m=+0.168554096 container init 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + sudo -E kolla_set_configs
Jan 05 20:46:32 compute-0 podman[98214]: 2026-01-05 20:46:32.341683783 +0000 UTC m=+0.200832935 container start 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 20:46:32 compute-0 edpm-start-podman-container[98214]: ovn_controller
Jan 05 20:46:32 compute-0 systemd[1]: Created slice User Slice of UID 0.
Jan 05 20:46:32 compute-0 systemd[1]: Starting User Runtime Directory /run/user/0...
Jan 05 20:46:32 compute-0 systemd[1]: Finished User Runtime Directory /run/user/0.
Jan 05 20:46:32 compute-0 systemd[1]: Starting User Manager for UID 0...
Jan 05 20:46:32 compute-0 systemd[98264]: pam_unix(systemd-user:session): session opened for user root(uid=0) by root(uid=0)
Jan 05 20:46:32 compute-0 edpm-start-podman-container[98213]: Creating additional drop-in dependency for "ovn_controller" (8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4)
Jan 05 20:46:32 compute-0 podman[98235]: 2026-01-05 20:46:32.446109581 +0000 UTC m=+0.081937795 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_id=ovn_controller, managed_by=edpm_ansible)
Jan 05 20:46:32 compute-0 systemd[1]: 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4-33cc8b1fa85b9e50.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 20:46:32 compute-0 systemd[1]: 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4-33cc8b1fa85b9e50.service: Failed with result 'exit-code'.
Jan 05 20:46:32 compute-0 systemd[1]: Reloading.
Jan 05 20:46:32 compute-0 systemd-rc-local-generator[98314]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:46:32 compute-0 systemd-sysv-generator[98318]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:46:32 compute-0 systemd[98264]: Queued start job for default target Main User Target.
Jan 05 20:46:32 compute-0 systemd[98264]: Created slice User Application Slice.
Jan 05 20:46:32 compute-0 systemd[98264]: Mark boot as successful after the user session has run 2 minutes was skipped because of an unmet condition check (ConditionUser=!@system).
Jan 05 20:46:32 compute-0 systemd[98264]: Started Daily Cleanup of User's Temporary Directories.
Jan 05 20:46:32 compute-0 systemd[98264]: Reached target Paths.
Jan 05 20:46:32 compute-0 systemd[98264]: Reached target Timers.
Jan 05 20:46:32 compute-0 systemd[98264]: Starting D-Bus User Message Bus Socket...
Jan 05 20:46:32 compute-0 systemd[98264]: Starting Create User's Volatile Files and Directories...
Jan 05 20:46:32 compute-0 systemd[98264]: Finished Create User's Volatile Files and Directories.
Jan 05 20:46:32 compute-0 systemd[98264]: Listening on D-Bus User Message Bus Socket.
Jan 05 20:46:32 compute-0 systemd[98264]: Reached target Sockets.
Jan 05 20:46:32 compute-0 systemd[98264]: Reached target Basic System.
Jan 05 20:46:32 compute-0 systemd[98264]: Reached target Main User Target.
Jan 05 20:46:32 compute-0 systemd[98264]: Startup finished in 159ms.
Jan 05 20:46:32 compute-0 systemd[1]: Started User Manager for UID 0.
Jan 05 20:46:32 compute-0 systemd[1]: Started ovn_controller container.
Jan 05 20:46:32 compute-0 systemd[1]: Started Session c1 of User root.
Jan 05 20:46:32 compute-0 sudo[98171]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:32 compute-0 ovn_controller[98229]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 20:46:32 compute-0 ovn_controller[98229]: INFO:__main__:Validating config file
Jan 05 20:46:32 compute-0 ovn_controller[98229]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 20:46:32 compute-0 ovn_controller[98229]: INFO:__main__:Writing out command to execute
Jan 05 20:46:32 compute-0 systemd[1]: session-c1.scope: Deactivated successfully.
Jan 05 20:46:32 compute-0 ovn_controller[98229]: ++ cat /run_command
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + CMD='/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + ARGS=
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + sudo kolla_copy_cacerts
Jan 05 20:46:32 compute-0 systemd[1]: Started Session c2 of User root.
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + [[ ! -n '' ]]
Jan 05 20:46:32 compute-0 systemd[1]: session-c2.scope: Deactivated successfully.
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + . kolla_extend_start
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + echo 'Running command: '\''/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '\'''
Jan 05 20:46:32 compute-0 ovn_controller[98229]: Running command: '/usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock  -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt '
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + umask 0022
Jan 05 20:46:32 compute-0 ovn_controller[98229]: + exec /usr/bin/ovn-controller --pidfile unix:/run/openvswitch/db.sock -p /etc/pki/tls/private/ovndb.key -c /etc/pki/tls/certs/ovndb.crt -C /etc/pki/tls/certs/ovndbca.crt
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00001|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00002|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00003|main|INFO|OVN internal version is : [24.03.8-20.33.0-76.8]
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00004|main|INFO|OVS IDL reconnected, force recompute.
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00005|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00006|main|INFO|OVNSB IDL reconnected, force recompute.
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <info>  [1767645992.8593] manager: (br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/14)
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <info>  [1767645992.8601] device (br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <warn>  [1767645992.8604] device (br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <info>  [1767645992.8611] manager: (br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/15)
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <info>  [1767645992.8617] manager: (br-int): new Open vSwitch Bridge device (/org/freedesktop/NetworkManager/Devices/16)
Jan 05 20:46:32 compute-0 NetworkManager[56598]: <info>  [1767645992.8622] device (br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00007|reconnect|INFO|ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00008|features|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00009|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:32 compute-0 kernel: br-int: entered promiscuous mode
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00010|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00011|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00012|reconnect|INFO|unix:/run/openvswitch/db.sock: connecting...
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00013|ofctrl|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00014|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00015|rconn|WARN|unix:/var/run/openvswitch/br-int.mgmt: connection failed (No such file or directory)
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00016|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: waiting 1 seconds before reconnect
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00017|reconnect|INFO|unix:/run/openvswitch/db.sock: connected
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00018|features|INFO|OVS DB schema supports 4 flow table prefixes, our IDL supports: 4
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00019|main|INFO|Setting flow table prefixes: ip_src, ip_dst, ipv6_src, ipv6_dst.
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00020|features|INFO|OVS Feature: ct_zero_snat, state: supported
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00021|features|INFO|OVS Feature: ct_flush, state: supported
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00022|features|INFO|OVS Feature: dp_hash_l4_sym_support, state: supported
Jan 05 20:46:32 compute-0 ovn_controller[98229]: 2026-01-05T20:46:32Z|00023|main|INFO|OVS feature set changed, force recompute.
Jan 05 20:46:32 compute-0 systemd-udevd[98361]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:46:33 compute-0 python3.9[98489]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00024|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00025|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00026|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00027|rconn|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00028|ofctrl|INFO|ofctrl-wait-before-clear is now 8000 ms (was 0 ms)
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00029|main|INFO|OVS OpenFlow connection reconnected,force recompute.
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00030|main|INFO|OVS feature set changed, force recompute.
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00001|pinctrl(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00002|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00001|statctrl(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting to switch
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00002|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connecting...
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00003|rconn(ovn_pinctrl0)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 20:46:33 compute-0 ovn_controller[98229]: 2026-01-05T20:46:33Z|00003|rconn(ovn_statctrl3)|INFO|unix:/var/run/openvswitch/br-int.mgmt: connected
Jan 05 20:46:33 compute-0 NetworkManager[56598]: <info>  [1767645993.8730] manager: (ovn-e716d6-0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/17)
Jan 05 20:46:33 compute-0 kernel: genev_sys_6081: entered promiscuous mode
Jan 05 20:46:33 compute-0 systemd-udevd[98363]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 20:46:33 compute-0 NetworkManager[56598]: <info>  [1767645993.8976] device (genev_sys_6081): carrier: link connected
Jan 05 20:46:33 compute-0 NetworkManager[56598]: <info>  [1767645993.8980] manager: (genev_sys_6081): new Generic device (/org/freedesktop/NetworkManager/Devices/18)
Jan 05 20:46:34 compute-0 sudo[98642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vozdyhcjpnemloubmmrupxidgddsmkwt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645994.0310504-632-107054294068248/AnsiballZ_stat.py'
Jan 05 20:46:34 compute-0 sudo[98642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:34 compute-0 python3.9[98644]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:34 compute-0 sudo[98642]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:35 compute-0 sudo[98765]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tecjpwveleazflyqzqqexmphlolldlym ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645994.0310504-632-107054294068248/AnsiballZ_copy.py'
Jan 05 20:46:35 compute-0 sudo[98765]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:35 compute-0 python3.9[98767]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767645994.0310504-632-107054294068248/.source.yaml _original_basename=.jx4pa26r follow=False checksum=f981a8bd210d3aa0f09ca46002d69abd32d33522 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:46:35 compute-0 sudo[98765]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:35 compute-0 sudo[98917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ssgbtixylgtavwvdhbdstkgjptfuwocj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645995.5422645-647-106935228243459/AnsiballZ_command.py'
Jan 05 20:46:35 compute-0 sudo[98917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:36 compute-0 python3.9[98919]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove open . other_config hw-offload
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:46:36 compute-0 ovs-vsctl[98920]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove open . other_config hw-offload
Jan 05 20:46:36 compute-0 sudo[98917]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:36 compute-0 sudo[99070]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvalggwamwrdvpfvkktkhitrpsewiwtq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645996.3888693-655-147439861088350/AnsiballZ_command.py'
Jan 05 20:46:36 compute-0 sudo[99070]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:36 compute-0 python3.9[99072]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl get Open_vSwitch . external_ids:ovn-cms-options | sed 's/\"//g'
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:46:36 compute-0 ovs-vsctl[99074]: ovs|00001|db_ctl_base|ERR|no key "ovn-cms-options" in Open_vSwitch record "." column external_ids
Jan 05 20:46:36 compute-0 sudo[99070]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:37 compute-0 sudo[99225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zafydqpdhjoxpihequxjtsxfmwznhnwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767645997.4325361-669-32119450658394/AnsiballZ_command.py'
Jan 05 20:46:37 compute-0 sudo[99225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:38 compute-0 python3.9[99227]: ansible-ansible.legacy.command Invoked with _raw_params=ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
                                             _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:46:38 compute-0 ovs-vsctl[99228]: ovs|00001|vsctl|INFO|Called as ovs-vsctl remove Open_vSwitch . external_ids ovn-cms-options
Jan 05 20:46:38 compute-0 sudo[99225]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:38 compute-0 sshd-session[87583]: Connection closed by 192.168.122.30 port 52912
Jan 05 20:46:38 compute-0 sshd-session[87580]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:46:38 compute-0 systemd[1]: session-19.scope: Deactivated successfully.
Jan 05 20:46:38 compute-0 systemd[1]: session-19.scope: Consumed 58.938s CPU time.
Jan 05 20:46:38 compute-0 systemd-logind[788]: Session 19 logged out. Waiting for processes to exit.
Jan 05 20:46:38 compute-0 systemd-logind[788]: Removed session 19.
Jan 05 20:46:42 compute-0 systemd[1]: Stopping User Manager for UID 0...
Jan 05 20:46:42 compute-0 systemd[98264]: Activating special unit Exit the Session...
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped target Main User Target.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped target Basic System.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped target Paths.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped target Sockets.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped target Timers.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped Daily Cleanup of User's Temporary Directories.
Jan 05 20:46:42 compute-0 systemd[98264]: Closed D-Bus User Message Bus Socket.
Jan 05 20:46:42 compute-0 systemd[98264]: Stopped Create User's Volatile Files and Directories.
Jan 05 20:46:42 compute-0 systemd[98264]: Removed slice User Application Slice.
Jan 05 20:46:42 compute-0 systemd[98264]: Reached target Shutdown.
Jan 05 20:46:42 compute-0 systemd[98264]: Finished Exit the Session.
Jan 05 20:46:42 compute-0 systemd[98264]: Reached target Exit the Session.
Jan 05 20:46:42 compute-0 systemd[1]: user@0.service: Deactivated successfully.
Jan 05 20:46:42 compute-0 systemd[1]: Stopped User Manager for UID 0.
Jan 05 20:46:42 compute-0 systemd[1]: Stopping User Runtime Directory /run/user/0...
Jan 05 20:46:42 compute-0 systemd[1]: run-user-0.mount: Deactivated successfully.
Jan 05 20:46:42 compute-0 systemd[1]: user-runtime-dir@0.service: Deactivated successfully.
Jan 05 20:46:42 compute-0 systemd[1]: Stopped User Runtime Directory /run/user/0.
Jan 05 20:46:42 compute-0 systemd[1]: Removed slice User Slice of UID 0.
Jan 05 20:46:44 compute-0 sshd-session[99254]: Accepted publickey for zuul from 192.168.122.30 port 40704 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:46:44 compute-0 systemd-logind[788]: New session 21 of user zuul.
Jan 05 20:46:44 compute-0 systemd[1]: Started Session 21 of User zuul.
Jan 05 20:46:44 compute-0 sshd-session[99254]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:46:45 compute-0 python3.9[99407]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:46:46 compute-0 sudo[99561]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcoergiqrnoxqycsobviqsxckhydltrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646005.7614849-34-239579759724958/AnsiballZ_file.py'
Jan 05 20:46:46 compute-0 sudo[99561]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:46 compute-0 python3.9[99563]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/var/lib/openstack/neutron-ovn-metadata-agent setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:46 compute-0 sudo[99561]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:47 compute-0 sudo[99713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yfozvjjoqmcqfuhgqpewhypchxilfkaf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646006.6677525-34-10504062044388/AnsiballZ_file.py'
Jan 05 20:46:47 compute-0 sudo[99713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:47 compute-0 python3.9[99715]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:47 compute-0 sudo[99713]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:47 compute-0 sudo[99865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pktroqbxbpdfpjsiqmjapxatcyvcjxey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646007.4282117-34-92039973440953/AnsiballZ_file.py'
Jan 05 20:46:47 compute-0 sudo[99865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:48 compute-0 python3.9[99867]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/kill_scripts setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:48 compute-0 sudo[99865]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:48 compute-0 sudo[100017]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfkqyqkfuzbspzrpsdtivuyuovzwsqkc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646008.3432183-34-170574562572097/AnsiballZ_file.py'
Jan 05 20:46:48 compute-0 sudo[100017]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:48 compute-0 python3.9[100019]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/ovn-metadata-proxy setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:49 compute-0 sudo[100017]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:49 compute-0 sudo[100169]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lirjlebzxmwjfidqggbiwquslzjucbjy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646009.196853-34-72592690835519/AnsiballZ_file.py'
Jan 05 20:46:49 compute-0 sudo[100169]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:49 compute-0 python3.9[100171]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/neutron/external/pids setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:49 compute-0 sudo[100169]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:50 compute-0 python3.9[100321]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'selinux'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:46:52 compute-0 sudo[100472]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ybxhoejokghtnnbiihsodhnhkkhknhft ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646010.8343318-78-232505392596646/AnsiballZ_seboolean.py'
Jan 05 20:46:52 compute-0 sudo[100472]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:52 compute-0 python3.9[100474]: ansible-ansible.posix.seboolean Invoked with name=virt_sandbox_use_netlink persistent=True state=True ignore_selinux_state=False
Jan 05 20:46:53 compute-0 sudo[100472]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:54 compute-0 python3.9[100624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/ovn_metadata_haproxy_wrapper follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:55 compute-0 python3.9[100747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/ovn_metadata_haproxy_wrapper mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646013.3595877-86-78483929182336/.source follow=False _original_basename=haproxy.j2 checksum=a5072e7b19ca96a1f495d94f97f31903737cfd27 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:56 compute-0 python3.9[100897]: ansible-ansible.legacy.stat Invoked with path=/var/lib/neutron/kill_scripts/haproxy-kill follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:46:56 compute-0 sshd-session[100625]: Connection closed by authenticating user root 43.226.60.137 port 47074 [preauth]
Jan 05 20:46:56 compute-0 python3.9[101018]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/neutron/kill_scripts/haproxy-kill mode=0755 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646015.5890179-101-104590397543573/.source follow=False _original_basename=kill-script.j2 checksum=2dfb5489f491f61b95691c3bf95fa1fe48ff3700 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:46:57 compute-0 sudo[101168]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxfsjojmauniamypxdzprdxyqlcurrmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646017.0904355-118-107272642924199/AnsiballZ_setup.py'
Jan 05 20:46:57 compute-0 sudo[101168]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:57 compute-0 python3.9[101170]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:46:58 compute-0 sudo[101168]: pam_unix(sudo:session): session closed for user root
Jan 05 20:46:58 compute-0 sudo[101252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ymljveisyytqpectnnywxscqoybyotab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646017.0904355-118-107272642924199/AnsiballZ_dnf.py'
Jan 05 20:46:58 compute-0 sudo[101252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:46:58 compute-0 python3.9[101254]: ansible-ansible.legacy.dnf Invoked with name=['openvswitch'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:47:00 compute-0 sudo[101252]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:01 compute-0 sudo[101405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrcafvvzhjyjzxbcrodfnsccrbdnbmu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646020.4281971-130-21966806787314/AnsiballZ_systemd.py'
Jan 05 20:47:01 compute-0 sudo[101405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:01 compute-0 python3.9[101407]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=openvswitch.service state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:47:01 compute-0 sudo[101405]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:02 compute-0 python3.9[101560]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:02 compute-0 ovn_controller[98229]: 2026-01-05T20:47:02Z|00031|memory|INFO|16000 kB peak resident set size after 29.9 seconds
Jan 05 20:47:02 compute-0 ovn_controller[98229]: 2026-01-05T20:47:02Z|00032|memory|INFO|idl-cells-OVN_Southbound:239 idl-cells-Open_vSwitch:471 ofctrl_desired_flow_usage-KB:5 ofctrl_installed_flow_usage-KB:4 ofctrl_sb_flow_ref_usage-KB:2
Jan 05 20:47:02 compute-0 podman[101584]: 2026-01-05 20:47:02.793665151 +0000 UTC m=+0.133173486 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_controller)
Jan 05 20:47:03 compute-0 python3.9[101707]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-rootwrap.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646021.9030223-138-138131443574260/.source.conf follow=False _original_basename=rootwrap.conf.j2 checksum=11f2cfb4b7d97b2cef3c2c2d88089e6999cffe22 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:03 compute-0 python3.9[101857]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:04 compute-0 python3.9[101978]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/01-neutron-ovn-metadata-agent.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646023.2944992-138-10670483575579/.source.conf follow=False _original_basename=neutron-ovn-metadata-agent.conf.j2 checksum=8bc979abbe81c2cf3993a225517a7e2483e20443 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:05 compute-0 python3.9[102128]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:06 compute-0 python3.9[102249]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/10-neutron-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646025.2517302-182-265339891179509/.source.conf _original_basename=10-neutron-metadata.conf follow=False checksum=ca7d4d155f5b812fab1a3b70e34adb495d291b8d backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:07 compute-0 python3.9[102399]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:07 compute-0 python3.9[102520]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/neutron-ovn-metadata-agent/05-nova-metadata.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646026.6262743-182-159388651431945/.source.conf _original_basename=05-nova-metadata.conf follow=False checksum=a14d6b38898a379cd37fc0bf365d17f10859446f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:08 compute-0 python3.9[102670]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:47:09 compute-0 sudo[102822]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-favpzqmagodjyuoiylzhxljpkndbczyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646029.1196015-220-223073197131171/AnsiballZ_file.py'
Jan 05 20:47:09 compute-0 sudo[102822]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:09 compute-0 python3.9[102824]: ansible-ansible.builtin.file Invoked with path=/var/local/libexec recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:09 compute-0 sudo[102822]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:10 compute-0 sudo[102974]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rpgwaodtegaagqlktqhznlpzwiwalhqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646029.9576397-228-30360657721844/AnsiballZ_stat.py'
Jan 05 20:47:10 compute-0 sudo[102974]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:10 compute-0 python3.9[102976]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-container-shutdown follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:10 compute-0 sudo[102974]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:10 compute-0 sudo[103052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcoqpbicknoonmwqnaujrwmuogmdbogj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646029.9576397-228-30360657721844/AnsiballZ_file.py'
Jan 05 20:47:10 compute-0 sudo[103052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:11 compute-0 python3.9[103054]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-container-shutdown _original_basename=edpm-container-shutdown recurse=False state=file path=/var/local/libexec/edpm-container-shutdown force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:11 compute-0 sudo[103052]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:11 compute-0 sudo[103204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yypgsltqghquziektpwcrvzrppcsnwjk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646031.3688471-228-185414304600879/AnsiballZ_stat.py'
Jan 05 20:47:11 compute-0 sudo[103204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:11 compute-0 python3.9[103206]: ansible-ansible.legacy.stat Invoked with path=/var/local/libexec/edpm-start-podman-container follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:12 compute-0 sudo[103204]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:12 compute-0 sudo[103282]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ouwordkyfuzgpkmthwbszqdcznzeegus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646031.3688471-228-185414304600879/AnsiballZ_file.py'
Jan 05 20:47:12 compute-0 sudo[103282]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:12 compute-0 python3.9[103284]: ansible-ansible.legacy.file Invoked with group=root mode=0700 owner=root setype=container_file_t dest=/var/local/libexec/edpm-start-podman-container _original_basename=edpm-start-podman-container recurse=False state=file path=/var/local/libexec/edpm-start-podman-container force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:12 compute-0 sudo[103282]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:13 compute-0 sudo[103434]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekxbaiiztiahmpjhcmgddwmamkheyrxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646032.7918406-251-174928688945792/AnsiballZ_file.py'
Jan 05 20:47:13 compute-0 sudo[103434]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:13 compute-0 python3.9[103436]: ansible-ansible.builtin.file Invoked with mode=420 path=/etc/systemd/system-preset state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:13 compute-0 sudo[103434]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:14 compute-0 sudo[103586]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jblmdococgwhkgkgklztwgdsemkruvzy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646033.7286057-259-92622728610438/AnsiballZ_stat.py'
Jan 05 20:47:14 compute-0 sudo[103586]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:14 compute-0 python3.9[103588]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm-container-shutdown.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:14 compute-0 sudo[103586]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:14 compute-0 sudo[103664]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qrgsnpabodcxsmhqaocsikmdbgfikcci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646033.7286057-259-92622728610438/AnsiballZ_file.py'
Jan 05 20:47:14 compute-0 sudo[103664]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:14 compute-0 python3.9[103666]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/edpm-container-shutdown.service _original_basename=edpm-container-shutdown-service recurse=False state=file path=/etc/systemd/system/edpm-container-shutdown.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:14 compute-0 sudo[103664]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:15 compute-0 sudo[103816]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jjxiyrmgxrhposoaitktpzjmxfybnpfb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646035.175219-271-33859288995779/AnsiballZ_stat.py'
Jan 05 20:47:15 compute-0 sudo[103816]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:15 compute-0 python3.9[103818]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:15 compute-0 sudo[103816]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:16 compute-0 sudo[103894]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojybulqragdeshifethlypwygvoutfxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646035.175219-271-33859288995779/AnsiballZ_file.py'
Jan 05 20:47:16 compute-0 sudo[103894]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:16 compute-0 python3.9[103896]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-edpm-container-shutdown.preset _original_basename=91-edpm-container-shutdown-preset recurse=False state=file path=/etc/systemd/system-preset/91-edpm-container-shutdown.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:16 compute-0 sudo[103894]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:17 compute-0 sudo[104046]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxtaeioychjgplclyybojlpdqmjjgmad ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646036.618956-283-110044764177237/AnsiballZ_systemd.py'
Jan 05 20:47:17 compute-0 sudo[104046]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:17 compute-0 python3.9[104048]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm-container-shutdown state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:47:17 compute-0 systemd[1]: Reloading.
Jan 05 20:47:17 compute-0 systemd-rc-local-generator[104070]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:17 compute-0 systemd-sysv-generator[104077]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:17 compute-0 sudo[104046]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:18 compute-0 sudo[104236]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-grfdgnpdyjstpoepsobljdydlspgnldd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646038.0056167-291-275055908759379/AnsiballZ_stat.py'
Jan 05 20:47:18 compute-0 sudo[104236]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:18 compute-0 python3.9[104238]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/netns-placeholder.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:18 compute-0 sudo[104236]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:18 compute-0 sudo[104314]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dlxrvzvfbebhstgrbsunkhwyvltbrwmz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646038.0056167-291-275055908759379/AnsiballZ_file.py'
Jan 05 20:47:18 compute-0 sudo[104314]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:19 compute-0 python3.9[104316]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system/netns-placeholder.service _original_basename=netns-placeholder-service recurse=False state=file path=/etc/systemd/system/netns-placeholder.service force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:19 compute-0 sudo[104314]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:19 compute-0 sudo[104466]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojmfklzpxrujamduylqhnyyvfebnyygq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646039.4022412-303-166620818723035/AnsiballZ_stat.py'
Jan 05 20:47:19 compute-0 sudo[104466]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:19 compute-0 python3.9[104468]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system-preset/91-netns-placeholder.preset follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:20 compute-0 sudo[104466]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:20 compute-0 sudo[104544]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyidisagoswwzqipjbyqarbzzoetzzea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646039.4022412-303-166620818723035/AnsiballZ_file.py'
Jan 05 20:47:20 compute-0 sudo[104544]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:20 compute-0 python3.9[104546]: ansible-ansible.legacy.file Invoked with group=root mode=0644 owner=root dest=/etc/systemd/system-preset/91-netns-placeholder.preset _original_basename=91-netns-placeholder-preset recurse=False state=file path=/etc/systemd/system-preset/91-netns-placeholder.preset force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:20 compute-0 sudo[104544]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:21 compute-0 sudo[104696]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyusabvfryouhosowjfvwmolocbyasnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646040.7573414-315-61313617243875/AnsiballZ_systemd.py'
Jan 05 20:47:21 compute-0 sudo[104696]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:21 compute-0 python3.9[104698]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=netns-placeholder state=started daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:47:21 compute-0 systemd[1]: Reloading.
Jan 05 20:47:21 compute-0 systemd-rc-local-generator[104727]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:21 compute-0 systemd-sysv-generator[104731]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:21 compute-0 systemd[1]: Starting Create netns directory...
Jan 05 20:47:21 compute-0 systemd[1]: run-netns-placeholder.mount: Deactivated successfully.
Jan 05 20:47:21 compute-0 systemd[1]: netns-placeholder.service: Deactivated successfully.
Jan 05 20:47:21 compute-0 systemd[1]: Finished Create netns directory.
Jan 05 20:47:21 compute-0 sudo[104696]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:22 compute-0 sudo[104889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gckbbftjwneaojpigwekpmyaiadxechq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646042.1739347-325-153753632497360/AnsiballZ_file.py'
Jan 05 20:47:22 compute-0 sudo[104889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:22 compute-0 python3.9[104891]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:22 compute-0 sudo[104889]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:23 compute-0 sudo[105041]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bdwloecuqsvpotfufykqevklnhjdevjd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646042.96494-333-62463017776512/AnsiballZ_stat.py'
Jan 05 20:47:23 compute-0 sudo[105041]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:23 compute-0 python3.9[105043]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ovn_metadata_agent/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:23 compute-0 sudo[105041]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:24 compute-0 sudo[105164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmildjpisdkdaohhntmjqiebrnvgaxmb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646042.96494-333-62463017776512/AnsiballZ_copy.py'
Jan 05 20:47:24 compute-0 sudo[105164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:24 compute-0 python3.9[105166]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ovn_metadata_agent/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646042.96494-333-62463017776512/.source _original_basename=healthcheck follow=False checksum=898a5a1fcd473cf731177fc866e3bd7ebf20a131 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:24 compute-0 sudo[105164]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:25 compute-0 sudo[105316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqfdpjexvoqitfbzokqlmzkeqnzqrslf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646044.7488005-350-218868037673553/AnsiballZ_file.py'
Jan 05 20:47:25 compute-0 sudo[105316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:25 compute-0 python3.9[105318]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:25 compute-0 sudo[105316]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:26 compute-0 sudo[105468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aqgelwnewvjqnyfdkrhowqqxpfizfhex ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646045.6143913-358-220856233732409/AnsiballZ_file.py'
Jan 05 20:47:26 compute-0 sudo[105468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:26 compute-0 python3.9[105470]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:47:26 compute-0 sudo[105468]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:26 compute-0 sudo[105622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oyquqdcnvnkhfplftgvfpuuthfatvdpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646046.5127938-366-9116553139648/AnsiballZ_stat.py'
Jan 05 20:47:26 compute-0 sudo[105622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:27 compute-0 python3.9[105624]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ovn_metadata_agent.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:27 compute-0 sudo[105622]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:27 compute-0 sudo[105745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvzahzdriqijfpautsxoumrlgcuynvyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646046.5127938-366-9116553139648/AnsiballZ_copy.py'
Jan 05 20:47:27 compute-0 sudo[105745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:27 compute-0 python3.9[105747]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ovn_metadata_agent.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646046.5127938-366-9116553139648/.source.json _original_basename=.ietee9vg follow=False checksum=a908ef151ded3a33ae6c9ac8be72a35e5e33b9dc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:27 compute-0 sudo[105745]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:28 compute-0 python3.9[105897]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:28 compute-0 sshd-session[105495]: Invalid user test from 43.226.60.137 port 43076
Jan 05 20:47:29 compute-0 sshd-session[105495]: Connection closed by invalid user test 43.226.60.137 port 43076 [preauth]
Jan 05 20:47:30 compute-0 sudo[106318]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsubgonubyaxmrzkvwggompomrzwwbxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646050.4472222-406-180822142555351/AnsiballZ_container_config_data.py'
Jan 05 20:47:30 compute-0 sudo[106318]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:31 compute-0 python3.9[106320]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_pattern=*.json debug=False
Jan 05 20:47:31 compute-0 sudo[106318]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:32 compute-0 sudo[106470]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auhlilyopseptgwnxevucsswpfprvsim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646051.5765884-417-137646894492069/AnsiballZ_container_config_hash.py'
Jan 05 20:47:32 compute-0 sudo[106470]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:32 compute-0 python3.9[106472]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:47:32 compute-0 sudo[106470]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:33 compute-0 sudo[106632]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vuqulntuxnrabgonlcwoxlqfroulbcbs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646052.5642033-426-106495896205625/AnsiballZ_podman_container_info.py'
Jan 05 20:47:33 compute-0 sudo[106632]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:33 compute-0 podman[106596]: 2026-01-05 20:47:33.210577521 +0000 UTC m=+0.170181840 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 20:47:33 compute-0 python3.9[106641]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:47:33 compute-0 sudo[106632]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:34 compute-0 sudo[106829]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmcwrcoxnjcalnqcxmepipyssyhmhbfb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646054.0199-439-264254023843086/AnsiballZ_edpm_container_manage.py'
Jan 05 20:47:34 compute-0 sudo[106829]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:34 compute-0 python3[106831]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ovn_metadata_agent config_id=ovn_metadata_agent config_overrides={} config_patterns=*.json containers=['ovn_metadata_agent'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:47:35 compute-0 podman[106869]: 2026-01-05 20:47:35.092610287 +0000 UTC m=+0.052802995 container create 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 05 20:47:35 compute-0 podman[106869]: 2026-01-05 20:47:35.061860541 +0000 UTC m=+0.022053289 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 20:47:35 compute-0 python3[106831]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ovn_metadata_agent --cgroupns=host --conmon-pidfile /run/ovn_metadata_agent.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env EDPM_CONFIG_HASH=ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d --healthcheck-command /openstack/healthcheck --label config_id=ovn_metadata_agent --label container_name=ovn_metadata_agent --label managed_by=edpm_ansible --label config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']} --log-driver journald --log-level info --network host --pid host --privileged=True --user root --volume /run/openvswitch:/run/openvswitch:z --volume /var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z --volume /run/netns:/run/netns:shared --volume /var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro --volume /var/lib/neutron:/var/lib/neutron:shared,z --volume /var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro --volume /var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro --volume /var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z --volume /var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z --volume /var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 20:47:35 compute-0 sudo[106829]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:35 compute-0 sudo[107057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-anwyhpworshhcxkmvnjlgfjrsaiielee ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646055.4741442-447-96397033784675/AnsiballZ_stat.py'
Jan 05 20:47:35 compute-0 sudo[107057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:36 compute-0 python3.9[107059]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:47:36 compute-0 sudo[107057]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:36 compute-0 sudo[107211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xotgutbgdtfmryrbbiwurfvkrphbisvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646056.370684-456-24560789538979/AnsiballZ_file.py'
Jan 05 20:47:36 compute-0 sudo[107211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:36 compute-0 python3.9[107213]: ansible-file Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:36 compute-0 sudo[107211]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:37 compute-0 sudo[107287]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwhfjcltahnlsytinundljmxufchmjba ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646056.370684-456-24560789538979/AnsiballZ_stat.py'
Jan 05 20:47:37 compute-0 sudo[107287]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:37 compute-0 python3.9[107289]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ovn_metadata_agent_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:47:37 compute-0 sudo[107287]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:38 compute-0 sudo[107438]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjjlqjyygqddqsjrwqspucvzpqkjwovh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646057.551247-456-134402779396887/AnsiballZ_copy.py'
Jan 05 20:47:38 compute-0 sudo[107438]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:38 compute-0 python3.9[107440]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646057.551247-456-134402779396887/source dest=/etc/systemd/system/edpm_ovn_metadata_agent.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:38 compute-0 sudo[107438]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:38 compute-0 sudo[107514]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pkrltbaxxgjvpneavlstwtutuyrezzhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646057.551247-456-134402779396887/AnsiballZ_systemd.py'
Jan 05 20:47:38 compute-0 sudo[107514]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:39 compute-0 python3.9[107516]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:47:39 compute-0 systemd[1]: Reloading.
Jan 05 20:47:39 compute-0 systemd-rc-local-generator[107542]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:39 compute-0 systemd-sysv-generator[107548]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:39 compute-0 sudo[107514]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:39 compute-0 sudo[107626]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pprnyubeuewtgbembffdmhwgpribmeni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646057.551247-456-134402779396887/AnsiballZ_systemd.py'
Jan 05 20:47:39 compute-0 sudo[107626]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:40 compute-0 python3.9[107628]: ansible-systemd Invoked with state=restarted name=edpm_ovn_metadata_agent.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:47:40 compute-0 systemd[1]: Reloading.
Jan 05 20:47:40 compute-0 systemd-rc-local-generator[107658]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:40 compute-0 systemd-sysv-generator[107664]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:40 compute-0 systemd[1]: Starting ovn_metadata_agent container...
Jan 05 20:47:40 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038c8c5406feba618b10289d23f15cb99c99ccc169d80f289c2b1b26e6d29291/merged/etc/neutron.conf.d supports timestamps until 2038 (0x7fffffff)
Jan 05 20:47:40 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/038c8c5406feba618b10289d23f15cb99c99ccc169d80f289c2b1b26e6d29291/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 20:47:40 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.
Jan 05 20:47:40 compute-0 podman[107669]: 2026-01-05 20:47:40.792985698 +0000 UTC m=+0.206518402 container init 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + sudo -E kolla_set_configs
Jan 05 20:47:40 compute-0 podman[107669]: 2026-01-05 20:47:40.830753818 +0000 UTC m=+0.244286492 container start 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 20:47:40 compute-0 edpm-start-podman-container[107669]: ovn_metadata_agent
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Validating config file
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Copying service configuration files
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Deleting /etc/neutron/rootwrap.conf
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Copying /etc/neutron.conf.d/01-rootwrap.conf to /etc/neutron/rootwrap.conf
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /etc/neutron/rootwrap.conf
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Writing out command to execute
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/ovn-metadata-proxy
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/external
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/ovn_metadata_haproxy_wrapper
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/kill_scripts/haproxy-kill
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: INFO:__main__:Setting permission for /var/lib/neutron/external/pids
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: ++ cat /run_command
Jan 05 20:47:40 compute-0 edpm-start-podman-container[107668]: Creating additional drop-in dependency for "ovn_metadata_agent" (490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39)
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + CMD=neutron-ovn-metadata-agent
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + ARGS=
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + sudo kolla_copy_cacerts
Jan 05 20:47:40 compute-0 podman[107691]: 2026-01-05 20:47:40.950133915 +0000 UTC m=+0.095585555 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + [[ ! -n '' ]]
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + . kolla_extend_start
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: Running command: 'neutron-ovn-metadata-agent'
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + echo 'Running command: '\''neutron-ovn-metadata-agent'\'''
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + umask 0022
Jan 05 20:47:40 compute-0 ovn_metadata_agent[107684]: + exec neutron-ovn-metadata-agent
Jan 05 20:47:40 compute-0 systemd[1]: Reloading.
Jan 05 20:47:41 compute-0 systemd-rc-local-generator[107759]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:41 compute-0 systemd-sysv-generator[107765]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:41 compute-0 systemd[1]: Started ovn_metadata_agent container.
Jan 05 20:47:41 compute-0 sudo[107626]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:42 compute-0 python3.9[107923]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.777 107689 INFO neutron.common.config [-] Logging enabled!
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.778 107689 INFO neutron.common.config [-] /usr/bin/neutron-ovn-metadata-agent version 22.2.2.dev43
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.778 107689 DEBUG neutron.common.config [-] command line: /usr/bin/neutron-ovn-metadata-agent setup_logging /usr/lib/python3.9/site-packages/neutron/common/config.py:123
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.778 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.778 107689 DEBUG neutron.agent.ovn.metadata_agent [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.778 107689 DEBUG neutron.agent.ovn.metadata_agent [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.779 107689 DEBUG neutron.agent.ovn.metadata_agent [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.780 107689 DEBUG neutron.agent.ovn.metadata_agent [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.781 107689 DEBUG neutron.agent.ovn.metadata_agent [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.782 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.783 107689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.784 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.785 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.786 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.787 107689 DEBUG neutron.agent.ovn.metadata_agent [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.788 107689 DEBUG neutron.agent.ovn.metadata_agent [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.789 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.790 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.791 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.792 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.793 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.794 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.795 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.796 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.797 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.798 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.799 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.800 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.801 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.802 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.803 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.804 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.805 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.806 107689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.807 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.808 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.809 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.810 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.811 107689 DEBUG neutron.agent.ovn.metadata_agent [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.820 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.821 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.821 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.821 107689 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connecting...
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.821 107689 INFO ovsdbapp.backend.ovs_idl.vlog [-] tcp:127.0.0.1:6640: connected
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.833 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Loaded chassis name d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b (UUID: d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b) and ovn bridge br-int. _load_config /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:309
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.854 107689 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.855 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.855 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.855 107689 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Chassis_Private.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.858 107689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.864 107689 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.869 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched CREATE: ChassisPrivateCreateEvent(events=('create',), table='Chassis_Private', conditions=(('name', '=', 'd9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b'),), old_conditions=None), priority=20 to row=Chassis_Private(chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], external_ids={}, name=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, nb_cfg_timestamp=1767646001881, nb_cfg=1) old= matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.870 107689 DEBUG neutron_lib.callbacks.manager [-] Subscribe: <bound method MetadataProxyHandler.post_fork_initialize of <neutron.agent.ovn.metadata.server.MetadataProxyHandler object at 0x7f731da07dc0>> process after_init 55550000, False subscribe /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:52
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.871 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.871 107689 DEBUG oslo_concurrency.lockutils [-] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.871 107689 DEBUG oslo_concurrency.lockutils [-] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.871 107689 INFO oslo_service.service [-] Starting 1 workers
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.876 107689 DEBUG oslo_service.service [-] Started child 108054 _start_child /usr/lib/python3.9/site-packages/oslo_service/service.py:575
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.879 107689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.namespace_cmd', '--privsep_sock_path', '/tmp/tmpqcv6q3h3/privsep.sock']
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.880 108054 DEBUG neutron_lib.callbacks.manager [-] Publish callbacks ['neutron.agent.ovn.metadata.server.MetadataProxyHandler.post_fork_initialize-427764'] for process (None), after_init _notify_loop /usr/lib/python3.9/site-packages/neutron_lib/callbacks/manager.py:184
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.905 108054 INFO neutron.agent.ovn.metadata.ovsdb [-] Getting OvsdbSbOvnIdl for MetadataAgent with retry
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.906 108054 DEBUG ovsdbapp.backend.ovs_idl [-] Created lookup_table index Chassis.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:87
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.906 108054 DEBUG ovsdbapp.backend.ovs_idl [-] Created schema index Datapath_Binding.tunnel_key autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.909 108054 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connecting...
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.915 108054 INFO ovsdbapp.backend.ovs_idl.vlog [-] ssl:ovsdbserver-sb.openstack.svc:6642: connected
Jan 05 20:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:42.921 108054 INFO eventlet.wsgi.server [-] (108054) wsgi starting up on http:/var/lib/neutron/metadata_proxy
Jan 05 20:47:42 compute-0 sudo[108076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-whvbcwormodoqfgcqsjctbewsplcrbva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646062.6235003-497-41695880440004/AnsiballZ_stat.py'
Jan 05 20:47:42 compute-0 sudo[108076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:43 compute-0 python3.9[108079]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:47:43 compute-0 sudo[108076]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:43 compute-0 kernel: capability: warning: `privsep-helper' uses deprecated v2 capabilities in a way that may be insecure
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.599 107689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.599 107689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpqcv6q3h3/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.423 108136 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.430 108136 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.434 108136 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_SYS_ADMIN/CAP_SYS_ADMIN/none
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.435 108136 INFO oslo.privsep.daemon [-] privsep daemon running as pid 108136
Jan 05 20:47:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:43.602 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[a0043c80-a754-4fc2-bf39-58231fe99f54]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 20:47:43 compute-0 sudo[108207]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgadrcscmunugjlrpnebgajptghweapv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646062.6235003-497-41695880440004/AnsiballZ_copy.py'
Jan 05 20:47:43 compute-0 sudo[108207]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:43 compute-0 python3.9[108209]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646062.6235003-497-41695880440004/.source.yaml _original_basename=.8n7i2tjw follow=False checksum=d18dac792a058922add8562e7ae25d94fd2a1fd2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:47:43 compute-0 sudo[108207]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.182 108136 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.182 108136 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.182 108136 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:47:44 compute-0 sshd-session[99257]: Connection closed by 192.168.122.30 port 40704
Jan 05 20:47:44 compute-0 sshd-session[99254]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:47:44 compute-0 systemd[1]: session-21.scope: Deactivated successfully.
Jan 05 20:47:44 compute-0 systemd[1]: session-21.scope: Consumed 44.881s CPU time.
Jan 05 20:47:44 compute-0 systemd-logind[788]: Session 21 logged out. Waiting for processes to exit.
Jan 05 20:47:44 compute-0 systemd-logind[788]: Removed session 21.
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.808 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[38adb969-5b83-4beb-92c4-44ab9f664aeb]: (4, []) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.812 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbAddCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, column=external_ids, values=({'neutron:ovn-metadata-id': '9922e0f0-90ba-506b-a173-7b869183c07a'},)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.823 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-bridge': 'br-int'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.829 107689 DEBUG oslo_service.service [-] Full set of CONF: wait /usr/lib/python3.9/site-packages/oslo_service/service.py:649
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.829 107689 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] config files: ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] agent_down_time                = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] allow_bulk                     = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] api_extensions_path            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.830 107689 DEBUG oslo_service.service [-] api_paste_config               = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] api_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] auth_ca_cert                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] auth_strategy                  = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] backlog                        = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] base_mac                       = fa:16:3e:00:00:00 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] bind_host                      = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] bind_port                      = 9696 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.831 107689 DEBUG oslo_service.service [-] client_socket_timeout          = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] config_dir                     = ['/etc/neutron.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] config_file                    = ['/etc/neutron/neutron.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] control_exchange               = neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] core_plugin                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] default_availability_zones     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.832 107689 DEBUG oslo_service.service [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'OFPHandler=INFO', 'OfctlService=INFO', 'os_ken.base.app_manager=INFO', 'os_ken.controller.controller=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] dhcp_agent_notification        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] dhcp_lease_duration            = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] dhcp_load_type                 = networks log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] dns_domain                     = openstacklocal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] enable_new_agents              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] enable_traditional_dhcp        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] external_dns_driver            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.833 107689 DEBUG oslo_service.service [-] external_pids                  = /var/lib/neutron/external/pids log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] filter_validation              = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] global_physnet_mtu             = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] host                           = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] http_retries                   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.834 107689 DEBUG oslo_service.service [-] ipam_driver                    = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] ipv6_pd_enabled                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.835 107689 DEBUG oslo_service.service [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] max_dns_nameservers            = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] max_header_line                = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.836 107689 DEBUG oslo_service.service [-] max_subnet_host_routes         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_backlog               = 4096 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_proxy_group           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_proxy_shared_secret   = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_proxy_socket          = /var/lib/neutron/metadata_proxy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_proxy_socket_mode     = deduce log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_proxy_user            =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] metadata_workers               = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] network_link_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] notify_nova_on_port_data_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.837 107689 DEBUG oslo_service.service [-] notify_nova_on_port_status_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_client_cert               =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_client_priv_key           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_metadata_host             = nova-metadata-internal.openstack.svc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_metadata_insecure         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_metadata_port             = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] nova_metadata_protocol         = https log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] pagination_max_limit           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] periodic_fuzzy_delay           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.838 107689 DEBUG oslo_service.service [-] periodic_interval              = 40 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] retry_until_window             = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rpc_resources_processing_step  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rpc_response_max_timeout       = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rpc_state_report_workers       = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] rpc_workers                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.839 107689 DEBUG oslo_service.service [-] send_events_interval           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] service_plugins                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] setproctitle                   = on log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] state_path                     = /var/lib/neutron log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] syslog_log_facility            = syslog log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] tcp_keepidle                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.840 107689 DEBUG oslo_service.service [-] use_ssl                        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] vlan_transparent               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] wsgi_default_pool_size         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] wsgi_keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] wsgi_log_format                = %(client_ip)s "%(request_line)s" status: %(status_code)s  len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] wsgi_server_debug              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.841 107689 DEBUG oslo_service.service [-] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] oslo_concurrency.lock_path     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.connection_string     = messaging:// log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.enabled               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.es_doc_type           = notification log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.es_scroll_size        = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.es_scroll_time        = 2m log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.842 107689 DEBUG oslo_service.service [-] profiler.filter_error_trace    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] profiler.hmac_keys             = SECRET_KEY log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] profiler.sentinel_service_name = mymaster log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] profiler.socket_timeout        = 0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] profiler.trace_sqlalchemy      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] oslo_policy.enforce_new_defaults = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] oslo_policy.enforce_scope      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.843 107689 DEBUG oslo_service.service [-] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.844 107689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] service_providers.service_provider = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.capabilities           = [21, 12, 1, 2, 19] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.group                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.helper_command         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.logger_name            = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.thread_pool_size       = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.845 107689 DEBUG oslo_service.service [-] privsep.user                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.group     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_dhcp_release.user      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.capabilities = [21, 12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.846 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_ovs_vsctl.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.capabilities = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_namespace.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_conntrack.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.847 107689 DEBUG oslo_service.service [-] privsep_conntrack.group        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_conntrack.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_conntrack.logger_name  = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_conntrack.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_conntrack.user         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.capabilities      = [12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.group             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.helper_command    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.logger_name       = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.thread_pool_size  = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.848 107689 DEBUG oslo_service.service [-] privsep_link.user              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.check_child_processes_action = respawn log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.check_child_processes_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.comment_iptables_rules   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.debug_iptables_rules     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.kill_scripts_path        = /etc/neutron/kill_scripts/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.root_helper              = sudo neutron-rootwrap /etc/neutron/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.root_helper_daemon       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.use_helper_for_ns_read   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.849 107689 DEBUG oslo_service.service [-] AGENT.use_random_fully         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.default_quota           = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_driver            = neutron.db.quota.driver_nolock.DbQuotaNoLockDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_network           = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_port              = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_security_group    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_security_group_rule = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.quota_subnet            = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.850 107689 DEBUG oslo_service.service [-] QUOTAS.track_quota_usage       = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.auth_section              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.auth_type                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.cafile                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.certfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.collect_timing            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.endpoint_type             = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.insecure                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.keyfile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.region_name               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.851 107689 DEBUG oslo_service.service [-] nova.split_loggers             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] nova.timeout                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.auth_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.endpoint_type        = public log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.852 107689 DEBUG oslo_service.service [-] placement.region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.853 107689 DEBUG oslo_service.service [-] ironic.enable_notifications    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.interface               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.service_type            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.854 107689 DEBUG oslo_service.service [-] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ironic.valid_interfaces        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] cli_script.dry_run             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ovn.allow_stateless_action_supported = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ovn.dhcp_default_lease_time    = 43200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.855 107689 DEBUG oslo_service.service [-] ovn.disable_ovn_dhcp_for_baremetal_ports = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.dns_servers                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.enable_distributed_floating_ip = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.neutron_sync_mode          = log log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_dhcp4_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_dhcp6_global_options   = {} log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_emit_need_to_frag      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_l3_mode                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_l3_scheduler           = leastloaded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_metadata_enabled       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.856 107689 DEBUG oslo_service.service [-] ovn.ovn_nb_ca_cert             =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_nb_certificate         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_nb_connection          = tcp:127.0.0.1:6641 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_nb_private_key         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_sb_ca_cert             = /etc/pki/tls/certs/ovndbca.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_sb_certificate         = /etc/pki/tls/certs/ovndb.crt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_sb_connection          = ssl:ovsdbserver-sb.openstack.svc:6642 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovn_sb_private_key         = /etc/pki/tls/private/ovndb.key log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovsdb_log_level            = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.857 107689 DEBUG oslo_service.service [-] ovn.ovsdb_probe_interval       = 60000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] ovn.ovsdb_retry_max_interval   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] ovn.vhost_sock_dir             = /var/run/openvswitch log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] ovn.vif_type                   = ovs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] OVS.bridge_mac_table_size      = 50000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] OVS.igmp_snooping_enable       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] OVS.ovsdb_timeout              = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] ovs.ovsdb_connection           = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] ovs.ovsdb_connection_timeout   = 180 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.858 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.859 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.860 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.861 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_notifications.driver = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:47:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:47:44.862 107689 DEBUG oslo_service.service [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 20:47:50 compute-0 sshd-session[108236]: Accepted publickey for zuul from 192.168.122.30 port 40654 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:47:50 compute-0 systemd-logind[788]: New session 22 of user zuul.
Jan 05 20:47:50 compute-0 systemd[1]: Started Session 22 of User zuul.
Jan 05 20:47:50 compute-0 sshd-session[108236]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:47:51 compute-0 python3.9[108389]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:47:52 compute-0 sudo[108543]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vbxveubpvgyfnfkmilnaoxkidxbkpbyw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646072.2063856-34-176322799100360/AnsiballZ_command.py'
Jan 05 20:47:52 compute-0 sudo[108543]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:53 compute-0 python3.9[108545]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --filter name=^nova_virtlogd$ --format \{\{.Names\}\} _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:47:53 compute-0 sudo[108543]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:54 compute-0 sudo[108708]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hiffsfxcpnundkcflcjjutrpdommnmzt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646073.5534523-45-203199092069563/AnsiballZ_systemd_service.py'
Jan 05 20:47:54 compute-0 sudo[108708]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:47:54 compute-0 python3.9[108710]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:47:54 compute-0 systemd[1]: Reloading.
Jan 05 20:47:54 compute-0 systemd-rc-local-generator[108738]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:47:54 compute-0 systemd-sysv-generator[108743]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:47:55 compute-0 sudo[108708]: pam_unix(sudo:session): session closed for user root
Jan 05 20:47:56 compute-0 python3.9[108895]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:47:56 compute-0 network[108912]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:47:56 compute-0 network[108913]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:47:56 compute-0 network[108914]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:48:01 compute-0 sshd-session[108974]: Invalid user user from 43.226.60.137 port 40974
Jan 05 20:48:01 compute-0 sshd-session[108974]: Connection closed by invalid user user 43.226.60.137 port 40974 [preauth]
Jan 05 20:48:01 compute-0 sudo[109175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjtiritzmbcyzjcoriqtzqwwdjtmjjjm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646081.2047539-64-127482637157889/AnsiballZ_systemd_service.py'
Jan 05 20:48:01 compute-0 sudo[109175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:01 compute-0 python3.9[109177]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_libvirt.target state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:02 compute-0 sudo[109175]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:02 compute-0 sudo[109328]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcdmhfibgyfgddlufivhjhjhvdoldwgi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646082.2041395-64-273682267169598/AnsiballZ_systemd_service.py'
Jan 05 20:48:02 compute-0 sudo[109328]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:02 compute-0 python3.9[109330]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtlogd_wrapper.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:02 compute-0 sudo[109328]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:03 compute-0 sudo[109494]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orerwyzpquwuvqagrsenryijhqcdnhmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646083.27525-64-252778300892775/AnsiballZ_systemd_service.py'
Jan 05 20:48:03 compute-0 sudo[109494]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:03 compute-0 podman[109455]: 2026-01-05 20:48:03.802908086 +0000 UTC m=+0.171852989 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 05 20:48:03 compute-0 python3.9[109503]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtnodedevd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:04 compute-0 sudo[109494]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:04 compute-0 sudo[109660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-daqxkpryvxnxxegctkeuvidyftorrurk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646084.1892178-64-2292062670908/AnsiballZ_systemd_service.py'
Jan 05 20:48:04 compute-0 sudo[109660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:04 compute-0 python3.9[109662]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtproxyd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:04 compute-0 sudo[109660]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:05 compute-0 sudo[109813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwekhlyznorgirnsoeahtkqoziyyxtto ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646085.176252-64-82548560982827/AnsiballZ_systemd_service.py'
Jan 05 20:48:05 compute-0 sudo[109813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:05 compute-0 python3.9[109815]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtqemud.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:05 compute-0 sudo[109813]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:06 compute-0 sudo[109966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyxhnvpgwlazlutsbrqrysvzncfrqyid ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646086.0936587-64-102322827989692/AnsiballZ_systemd_service.py'
Jan 05 20:48:06 compute-0 sudo[109966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:06 compute-0 python3.9[109968]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtsecretd.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:06 compute-0 sudo[109966]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:07 compute-0 sudo[110119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbxaeuglkoegoftlzokaoppvleobelcc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646086.921975-64-138011741093823/AnsiballZ_systemd_service.py'
Jan 05 20:48:07 compute-0 sudo[110119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:07 compute-0 python3.9[110121]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_virtstoraged.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:48:07 compute-0 sudo[110119]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:08 compute-0 sudo[110272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngfvnrdkcjynjxidwlbbmyralicwikay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646088.1080785-116-99378462480845/AnsiballZ_file.py'
Jan 05 20:48:08 compute-0 sudo[110272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:08 compute-0 python3.9[110274]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:08 compute-0 sudo[110272]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:09 compute-0 sudo[110424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-psrajzgxvhjhkvnhecejjhwrgkpkxspm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646089.0328329-116-254012705367616/AnsiballZ_file.py'
Jan 05 20:48:09 compute-0 sudo[110424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:09 compute-0 python3.9[110426]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:09 compute-0 sudo[110424]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:10 compute-0 sudo[110576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rckahxpwstvcdjgcnfzjvcbvdcecveei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646089.7812016-116-102134877257049/AnsiballZ_file.py'
Jan 05 20:48:10 compute-0 sudo[110576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:10 compute-0 python3.9[110578]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:10 compute-0 sudo[110576]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:10 compute-0 sudo[110728]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xrnmkkrpgmxbugvcssutayedpfdmwmrq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646090.5631664-116-262003691062507/AnsiballZ_file.py'
Jan 05 20:48:10 compute-0 sudo[110728]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:11 compute-0 python3.9[110730]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:11 compute-0 sudo[110728]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:11 compute-0 sudo[110892]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-erwwokeoymsnfsfepauznihfzbsxktih ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646091.3567789-116-210707014664949/AnsiballZ_file.py'
Jan 05 20:48:11 compute-0 sudo[110892]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:11 compute-0 podman[110854]: 2026-01-05 20:48:11.742588465 +0000 UTC m=+0.087484687 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:48:11 compute-0 python3.9[110898]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:11 compute-0 sudo[110892]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:12 compute-0 sudo[111052]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zkeflrjgaktogwytywsejboehorvohlh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646092.0798001-116-63389081908431/AnsiballZ_file.py'
Jan 05 20:48:12 compute-0 sudo[111052]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:12 compute-0 python3.9[111054]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:12 compute-0 sudo[111052]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:13 compute-0 sudo[111204]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vhcztnqhkapowyexruhirwmzbmymrskd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646092.7993925-116-279382401049086/AnsiballZ_file.py'
Jan 05 20:48:13 compute-0 sudo[111204]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:13 compute-0 python3.9[111206]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:13 compute-0 sudo[111204]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:14 compute-0 sudo[111356]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpovhpznjhqcnqvprlrvgoxnchmwvufp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646093.6874392-166-120522931453053/AnsiballZ_file.py'
Jan 05 20:48:14 compute-0 sudo[111356]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:14 compute-0 python3.9[111358]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_libvirt.target state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:14 compute-0 sudo[111356]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:14 compute-0 sudo[111508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dctskfvkqnekbynbjfygibmpfdqhzvrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646094.4514146-166-133446402484823/AnsiballZ_file.py'
Jan 05 20:48:14 compute-0 sudo[111508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:15 compute-0 python3.9[111510]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtlogd_wrapper.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:15 compute-0 sudo[111508]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:15 compute-0 sudo[111660]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbhdfydxrsnsyqtpsqabckdbpbjzsmbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646095.2564123-166-84872006760122/AnsiballZ_file.py'
Jan 05 20:48:15 compute-0 sudo[111660]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:15 compute-0 python3.9[111662]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtnodedevd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:15 compute-0 sudo[111660]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:16 compute-0 sudo[111812]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ivyxypnzzvecmnpfxmxtvrcjwxkolwqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646095.9486482-166-163619780898726/AnsiballZ_file.py'
Jan 05 20:48:16 compute-0 sudo[111812]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:16 compute-0 python3.9[111814]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtproxyd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:16 compute-0 sudo[111812]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:17 compute-0 sudo[111964]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwvehpdbswxmrzparcxjtfumwjnxwaua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646096.7267268-166-38636412289643/AnsiballZ_file.py'
Jan 05 20:48:17 compute-0 sudo[111964]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:17 compute-0 python3.9[111966]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtqemud.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:17 compute-0 sudo[111964]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:17 compute-0 sudo[112116]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlvviffqwzpazahdjdvvhrowrguioxav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646097.5165582-166-150283113402016/AnsiballZ_file.py'
Jan 05 20:48:17 compute-0 sudo[112116]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:18 compute-0 python3.9[112118]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtsecretd.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:18 compute-0 sudo[112116]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:18 compute-0 sudo[112268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqjevhmdyqufzmvwxlhgcrwwokavtyqp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646098.313576-166-71047615374428/AnsiballZ_file.py'
Jan 05 20:48:18 compute-0 sudo[112268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:18 compute-0 python3.9[112270]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_virtstoraged.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:48:18 compute-0 sudo[112268]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:19 compute-0 sudo[112420]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odewaqrdgogxebjpuywyuloxvghuhyzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646099.2466514-217-118321385716384/AnsiballZ_command.py'
Jan 05 20:48:19 compute-0 sudo[112420]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:19 compute-0 python3.9[112422]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:19 compute-0 sudo[112420]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:20 compute-0 python3.9[112574]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:48:21 compute-0 sudo[112724]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-undgbeuqcxixvwzgpnvxanrxaidxjwev ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646101.2055752-235-136241780589705/AnsiballZ_systemd_service.py'
Jan 05 20:48:21 compute-0 sudo[112724]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:21 compute-0 python3.9[112726]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:48:21 compute-0 systemd[1]: Reloading.
Jan 05 20:48:22 compute-0 systemd-rc-local-generator[112752]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:48:22 compute-0 systemd-sysv-generator[112757]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:48:22 compute-0 sudo[112724]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:22 compute-0 sudo[112912]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxjypknzhwqslqmllkaormfyihobxyfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646102.4912388-243-196684802123814/AnsiballZ_command.py'
Jan 05 20:48:22 compute-0 sudo[112912]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:23 compute-0 python3.9[112914]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_libvirt.target _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:23 compute-0 sudo[112912]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:23 compute-0 sudo[113065]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dhuzqynrkymgkomyhpkwdbxacmeibgvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646103.332855-243-246005233359517/AnsiballZ_command.py'
Jan 05 20:48:23 compute-0 sudo[113065]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:23 compute-0 python3.9[113067]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtlogd_wrapper.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:24 compute-0 sudo[113065]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:25 compute-0 sudo[113218]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ooromprybcvpnnrhkswckcaxhmkolipl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646105.1729193-243-256803885647364/AnsiballZ_command.py'
Jan 05 20:48:25 compute-0 sudo[113218]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:25 compute-0 python3.9[113220]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtnodedevd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:25 compute-0 sudo[113218]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:26 compute-0 sudo[113371]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbuewvoxptkjzcahnxvhpuzfjdqrvhaa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646105.9273655-243-52916391464172/AnsiballZ_command.py'
Jan 05 20:48:26 compute-0 sudo[113371]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:26 compute-0 python3.9[113373]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtproxyd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:26 compute-0 sudo[113371]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:27 compute-0 sudo[113524]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amwgpswfzngawlwqfzffqghnsjqmqmmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646106.682859-243-79231728484641/AnsiballZ_command.py'
Jan 05 20:48:27 compute-0 sudo[113524]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:27 compute-0 python3.9[113526]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtqemud.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:27 compute-0 sudo[113524]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:27 compute-0 sudo[113677]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcrzwdbkgsxzlcjxubtikfayfiskxdjn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646107.4401615-243-169555870704329/AnsiballZ_command.py'
Jan 05 20:48:27 compute-0 sudo[113677]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:28 compute-0 python3.9[113679]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtsecretd.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:28 compute-0 sudo[113677]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:28 compute-0 sudo[113830]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-weoydmaefvovjizhdfxhjpvmhudifdxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646108.2373617-243-195687804512557/AnsiballZ_command.py'
Jan 05 20:48:28 compute-0 sudo[113830]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:28 compute-0 python3.9[113832]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_virtstoraged.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:48:28 compute-0 sudo[113830]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:29 compute-0 sudo[113983]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ilmbpeastyubkhqmkpdevqrxttiyqfrt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646109.3778262-297-92087205073003/AnsiballZ_getent.py'
Jan 05 20:48:29 compute-0 sudo[113983]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:30 compute-0 python3.9[113985]: ansible-ansible.builtin.getent Invoked with database=passwd key=libvirt fail_key=True service=None split=None
Jan 05 20:48:30 compute-0 sudo[113983]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:31 compute-0 sudo[114136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wiofhyxdoybswglyjsizbagtmyzqvusi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646110.445652-305-235023034341731/AnsiballZ_group.py'
Jan 05 20:48:31 compute-0 sudo[114136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:31 compute-0 python3.9[114138]: ansible-ansible.builtin.group Invoked with gid=42473 name=libvirt state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:48:31 compute-0 groupadd[114139]: group added to /etc/group: name=libvirt, GID=42473
Jan 05 20:48:31 compute-0 groupadd[114139]: group added to /etc/gshadow: name=libvirt
Jan 05 20:48:31 compute-0 groupadd[114139]: new group: name=libvirt, GID=42473
Jan 05 20:48:31 compute-0 sudo[114136]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:32 compute-0 sudo[114296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-donpflrtyicjsmdvzaqqxpmdqozspdyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646111.5199342-313-128285465558768/AnsiballZ_user.py'
Jan 05 20:48:32 compute-0 sudo[114296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:32 compute-0 python3.9[114298]: ansible-ansible.builtin.user Invoked with comment=libvirt user group=libvirt groups=[''] name=libvirt shell=/sbin/nologin state=present uid=42473 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 20:48:32 compute-0 useradd[114300]: new user: name=libvirt, UID=42473, GID=42473, home=/home/libvirt, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 20:48:32 compute-0 sudo[114296]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:33 compute-0 sshd-session[114221]: Connection closed by authenticating user root 43.226.60.137 port 38450 [preauth]
Jan 05 20:48:33 compute-0 sudo[114456]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jmxfmhcdvtbktttvdnuereibdatskmjv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646112.9537644-324-175102389345622/AnsiballZ_setup.py'
Jan 05 20:48:33 compute-0 sudo[114456]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:33 compute-0 python3.9[114458]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:48:33 compute-0 sudo[114456]: pam_unix(sudo:session): session closed for user root
Jan 05 20:48:34 compute-0 sudo[114550]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srlsjrpgyulrhiiwnkwekdqkyjovraut ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646112.9537644-324-175102389345622/AnsiballZ_dnf.py'
Jan 05 20:48:34 compute-0 sudo[114550]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:48:34 compute-0 podman[114514]: 2026-01-05 20:48:34.615211146 +0000 UTC m=+0.156763877 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 20:48:34 compute-0 python3.9[114561]: ansible-ansible.legacy.dnf Invoked with name=['libvirt ', 'libvirt-admin ', 'libvirt-client ', 'libvirt-daemon ', 'qemu-kvm', 'qemu-img', 'libguestfs', 'libseccomp', 'swtpm', 'swtpm-tools', 'edk2-ovmf', 'ceph-common', 'cyrus-sasl-scram'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:48:42 compute-0 podman[114586]: 2026-01-05 20:48:42.728290704 +0000 UTC m=+0.081638448 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 05 20:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:48:42.813 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:48:42.815 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:48:42.815 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:49:05 compute-0 kernel: SELinux:  Converting 2760 SID table entries...
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:49:05 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:49:05 compute-0 podman[114790]: 2026-01-05 20:49:05.827579184 +0000 UTC m=+0.170355387 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:49:05 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=10 res=1
Jan 05 20:49:13 compute-0 podman[114818]: 2026-01-05 20:49:13.780033826 +0000 UTC m=+0.107680538 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 20:49:14 compute-0 sshd-session[114782]: ssh_dispatch_run_fatal: Connection from 43.226.60.137 port 34720: Connection timed out [preauth]
Jan 05 20:49:15 compute-0 kernel: SELinux:  Converting 2760 SID table entries...
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:49:15 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:49:36 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=11 res=1
Jan 05 20:49:36 compute-0 podman[118838]: 2026-01-05 20:49:36.885403917 +0000 UTC m=+0.197994669 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 05 20:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:49:42.814 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:49:42.815 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:49:42.815 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:49:44 compute-0 podman[122544]: 2026-01-05 20:49:44.752519384 +0000 UTC m=+0.083638570 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true)
Jan 05 20:50:07 compute-0 podman[131728]: 2026-01-05 20:50:07.764216618 +0000 UTC m=+0.118999468 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ovn_controller, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 20:50:15 compute-0 podman[131770]: 2026-01-05 20:50:15.747041582 +0000 UTC m=+0.088970701 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 20:50:19 compute-0 kernel: SELinux:  Converting 2761 SID table entries...
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability network_peer_controls=1
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability open_perms=1
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability extended_socket_class=1
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability always_check_network=0
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability cgroup_seclabel=1
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability nnp_nosuid_transition=1
Jan 05 20:50:19 compute-0 kernel: SELinux:  policy capability genfs_seclabel_symlinks=1
Jan 05 20:50:20 compute-0 groupadd[131801]: group added to /etc/group: name=dnsmasq, GID=993
Jan 05 20:50:20 compute-0 groupadd[131801]: group added to /etc/gshadow: name=dnsmasq
Jan 05 20:50:20 compute-0 groupadd[131801]: new group: name=dnsmasq, GID=993
Jan 05 20:50:20 compute-0 useradd[131808]: new user: name=dnsmasq, UID=992, GID=993, home=/var/lib/dnsmasq, shell=/usr/sbin/nologin, from=none
Jan 05 20:50:20 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:50:20 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=12 res=1
Jan 05 20:50:20 compute-0 dbus-broker-launch[770]: Noticed file-system modification, trigger reload.
Jan 05 20:50:21 compute-0 groupadd[131821]: group added to /etc/group: name=clevis, GID=992
Jan 05 20:50:21 compute-0 groupadd[131821]: group added to /etc/gshadow: name=clevis
Jan 05 20:50:21 compute-0 groupadd[131821]: new group: name=clevis, GID=992
Jan 05 20:50:21 compute-0 useradd[131828]: new user: name=clevis, UID=991, GID=992, home=/var/cache/clevis, shell=/usr/sbin/nologin, from=none
Jan 05 20:50:21 compute-0 usermod[131838]: add 'clevis' to group 'tss'
Jan 05 20:50:21 compute-0 usermod[131838]: add 'clevis' to shadow group 'tss'
Jan 05 20:50:24 compute-0 polkitd[44036]: Reloading rules
Jan 05 20:50:24 compute-0 polkitd[44036]: Collecting garbage unconditionally...
Jan 05 20:50:24 compute-0 polkitd[44036]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 20:50:24 compute-0 polkitd[44036]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 20:50:24 compute-0 polkitd[44036]: Finished loading, compiling and executing 3 rules
Jan 05 20:50:24 compute-0 polkitd[44036]: Reloading rules
Jan 05 20:50:24 compute-0 polkitd[44036]: Collecting garbage unconditionally...
Jan 05 20:50:24 compute-0 polkitd[44036]: Loading rules from directory /etc/polkit-1/rules.d
Jan 05 20:50:24 compute-0 polkitd[44036]: Loading rules from directory /usr/share/polkit-1/rules.d
Jan 05 20:50:24 compute-0 polkitd[44036]: Finished loading, compiling and executing 3 rules
Jan 05 20:50:25 compute-0 groupadd[132025]: group added to /etc/group: name=ceph, GID=167
Jan 05 20:50:25 compute-0 groupadd[132025]: group added to /etc/gshadow: name=ceph
Jan 05 20:50:25 compute-0 groupadd[132025]: new group: name=ceph, GID=167
Jan 05 20:50:25 compute-0 useradd[132031]: new user: name=ceph, UID=167, GID=167, home=/var/lib/ceph, shell=/sbin/nologin, from=none
Jan 05 20:50:29 compute-0 systemd[1]: Stopping OpenSSH server daemon...
Jan 05 20:50:29 compute-0 sshd[1007]: Received signal 15; terminating.
Jan 05 20:50:29 compute-0 systemd[1]: sshd.service: Deactivated successfully.
Jan 05 20:50:29 compute-0 systemd[1]: Stopped OpenSSH server daemon.
Jan 05 20:50:29 compute-0 systemd[1]: sshd.service: Consumed 3.356s CPU time, read 564.0K from disk, written 12.0K to disk.
Jan 05 20:50:29 compute-0 systemd[1]: Stopped target sshd-keygen.target.
Jan 05 20:50:29 compute-0 systemd[1]: Stopping sshd-keygen.target...
Jan 05 20:50:29 compute-0 systemd[1]: OpenSSH ecdsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:50:29 compute-0 systemd[1]: OpenSSH ed25519 Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:50:29 compute-0 systemd[1]: OpenSSH rsa Server Key Generation was skipped because of an unmet condition check (ConditionPathExists=!/run/systemd/generator.early/multi-user.target.wants/cloud-init.target).
Jan 05 20:50:29 compute-0 systemd[1]: Reached target sshd-keygen.target.
Jan 05 20:50:29 compute-0 systemd[1]: Starting OpenSSH server daemon...
Jan 05 20:50:29 compute-0 sshd[132550]: Server listening on 0.0.0.0 port 22.
Jan 05 20:50:29 compute-0 sshd[132550]: Server listening on :: port 22.
Jan 05 20:50:29 compute-0 systemd[1]: Started OpenSSH server daemon.
Jan 05 20:50:32 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:50:32 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:50:32 compute-0 systemd[1]: Reloading.
Jan 05 20:50:32 compute-0 systemd-rc-local-generator[132806]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:32 compute-0 systemd-sysv-generator[132809]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:32 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:50:35 compute-0 sudo[114550]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:36 compute-0 sudo[136481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nsnpxhthdjmggjyemfvddlvovltsxlcw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646235.840024-336-194880552660337/AnsiballZ_systemd.py'
Jan 05 20:50:36 compute-0 sudo[136481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:36 compute-0 python3.9[136508]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:50:36 compute-0 systemd[1]: Reloading.
Jan 05 20:50:37 compute-0 systemd-rc-local-generator[136810]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:37 compute-0 systemd-sysv-generator[136816]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:37 compute-0 sudo[136481]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:37 compute-0 sudo[137484]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etwufafburamiepcgkjndfmgoojbsvos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646237.4112413-336-35361757890259/AnsiballZ_systemd.py'
Jan 05 20:50:37 compute-0 sudo[137484]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:38 compute-0 python3.9[137500]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:50:38 compute-0 systemd[1]: Reloading.
Jan 05 20:50:38 compute-0 systemd-sysv-generator[137889]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:38 compute-0 systemd-rc-local-generator[137883]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:38 compute-0 podman[137721]: 2026-01-05 20:50:38.267811794 +0000 UTC m=+0.171277351 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 20:50:38 compute-0 sudo[137484]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:39 compute-0 sudo[138640]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drnsltzsreoyshoxnbdqtpbjpvxoefqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646238.6998382-336-126502809748492/AnsiballZ_systemd.py'
Jan 05 20:50:39 compute-0 sudo[138640]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:39 compute-0 python3.9[138661]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=libvirtd-tls.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:50:39 compute-0 systemd[1]: Reloading.
Jan 05 20:50:39 compute-0 systemd-rc-local-generator[138982]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:39 compute-0 systemd-sysv-generator[138989]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:39 compute-0 sudo[138640]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:40 compute-0 sudo[139655]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwxmzgxailpaiqbythlxcljzonpqfzxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646239.977363-336-130219560958900/AnsiballZ_systemd.py'
Jan 05 20:50:40 compute-0 sudo[139655]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:40 compute-0 python3.9[139680]: ansible-ansible.builtin.systemd Invoked with enabled=False masked=True name=virtproxyd-tcp.socket state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:50:40 compute-0 systemd[1]: Reloading.
Jan 05 20:50:40 compute-0 systemd-sysv-generator[140072]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:40 compute-0 systemd-rc-local-generator[140066]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:41 compute-0 sudo[139655]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:41 compute-0 sudo[140837]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rbzesidjsaurlxwvytvaarfqfxynqago ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646241.3775256-365-16268742925438/AnsiballZ_systemd.py'
Jan 05 20:50:41 compute-0 sudo[140837]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:42 compute-0 python3.9[140859]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:42 compute-0 systemd[1]: Reloading.
Jan 05 20:50:42 compute-0 systemd-rc-local-generator[141183]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:42 compute-0 systemd-sysv-generator[141187]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:42 compute-0 sudo[140837]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:50:42.816 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:50:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:50:42.818 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:50:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:50:42.818 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:50:43 compute-0 sudo[141821]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kjahqjfphagpdrbfzglxwlhirhzujylt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646242.6967082-365-147553428290325/AnsiballZ_systemd.py'
Jan 05 20:50:43 compute-0 sudo[141821]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:43 compute-0 python3.9[141842]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:43 compute-0 systemd[1]: Reloading.
Jan 05 20:50:43 compute-0 systemd-rc-local-generator[142228]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:43 compute-0 systemd-sysv-generator[142231]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:43 compute-0 sudo[141821]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:43 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:50:43 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:50:43 compute-0 systemd[1]: man-db-cache-update.service: Consumed 15.032s CPU time.
Jan 05 20:50:43 compute-0 systemd[1]: run-ra2b59efbf34543818885ee1b81b61ce8.service: Deactivated successfully.
Jan 05 20:50:44 compute-0 sudo[142501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwcsfrqignibdxgucubcymbzcmwmzekg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646243.9944756-365-224673658970639/AnsiballZ_systemd.py'
Jan 05 20:50:44 compute-0 sudo[142501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:44 compute-0 python3.9[142503]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:44 compute-0 systemd[1]: Reloading.
Jan 05 20:50:45 compute-0 systemd-rc-local-generator[142535]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:45 compute-0 systemd-sysv-generator[142538]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:45 compute-0 sudo[142501]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:45 compute-0 sudo[142690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ituwpqannuuodwxyxtplbupyiekgzyur ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646245.4000654-365-280656266027592/AnsiballZ_systemd.py'
Jan 05 20:50:45 compute-0 sudo[142690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:45 compute-0 podman[142692]: 2026-01-05 20:50:45.971328211 +0000 UTC m=+0.083766788 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 20:50:46 compute-0 python3.9[142693]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:46 compute-0 sudo[142690]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:47 compute-0 sudo[142865]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dsetwqroghqnnzpwxydqiwdzxrdoxkhh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646246.6652853-365-60340852822007/AnsiballZ_systemd.py'
Jan 05 20:50:47 compute-0 sudo[142865]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:47 compute-0 python3.9[142867]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.service daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:47 compute-0 systemd[1]: Reloading.
Jan 05 20:50:47 compute-0 systemd-rc-local-generator[142897]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:47 compute-0 systemd-sysv-generator[142902]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:47 compute-0 sudo[142865]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:48 compute-0 sudo[143055]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ubshiroklixpuzjdcybtpeprwmipsdle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646248.1034737-401-90074963430592/AnsiballZ_systemd.py'
Jan 05 20:50:48 compute-0 sudo[143055]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:48 compute-0 python3.9[143057]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-tls.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None
Jan 05 20:50:50 compute-0 systemd[1]: Reloading.
Jan 05 20:50:50 compute-0 systemd-sysv-generator[143090]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:50:50 compute-0 systemd-rc-local-generator[143087]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:50:50 compute-0 systemd[1]: Listening on libvirt proxy daemon socket.
Jan 05 20:50:50 compute-0 systemd[1]: Listening on libvirt proxy daemon TLS IP socket.
Jan 05 20:50:50 compute-0 sudo[143055]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:51 compute-0 sudo[143247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sxlcstsgovavlswdzxqxowakhpmxtheg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646250.6168191-409-20220428216932/AnsiballZ_systemd.py'
Jan 05 20:50:51 compute-0 sudo[143247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:51 compute-0 python3.9[143249]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:51 compute-0 sudo[143247]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:52 compute-0 sudo[143402]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dryefbzogqtbathhletswdjvqqupelwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646251.7709389-409-202695579999271/AnsiballZ_systemd.py'
Jan 05 20:50:52 compute-0 sudo[143402]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:52 compute-0 python3.9[143404]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtlogd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:52 compute-0 sudo[143402]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:53 compute-0 sudo[143557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bwjbqplwejsqrsbkkbyvgeflqcughfdm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646252.8271165-409-207206419371741/AnsiballZ_systemd.py'
Jan 05 20:50:53 compute-0 sudo[143557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:53 compute-0 python3.9[143559]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:53 compute-0 sudo[143557]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:54 compute-0 sudo[143712]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obvhhyeyspyyqiokdwlfshfcshgqkrsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646253.9080555-409-52425350637634/AnsiballZ_systemd.py'
Jan 05 20:50:54 compute-0 sudo[143712]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:54 compute-0 python3.9[143714]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:54 compute-0 sudo[143712]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:55 compute-0 sudo[143867]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdujzgobexefnnmmjedmiozhjyimvhji ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646254.942084-409-130010576861515/AnsiballZ_systemd.py'
Jan 05 20:50:55 compute-0 sudo[143867]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:55 compute-0 python3.9[143869]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtnodedevd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:55 compute-0 sudo[143867]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:56 compute-0 sudo[144022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gxsgvbmejpjnpxwojobjuwhnddbspwrd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646255.9100156-409-28613099797010/AnsiballZ_systemd.py'
Jan 05 20:50:56 compute-0 sudo[144022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:56 compute-0 python3.9[144024]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:56 compute-0 sudo[144022]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:57 compute-0 sudo[144177]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fskdabnizhcpnrzodzwugdmzmkwncqwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646256.8434956-409-263618357186575/AnsiballZ_systemd.py'
Jan 05 20:50:57 compute-0 sudo[144177]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:57 compute-0 python3.9[144179]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:57 compute-0 sudo[144177]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:58 compute-0 sudo[144332]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kflxordeopkyzrwngjecgfpjeuanklsg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646257.784543-409-217947369842101/AnsiballZ_systemd.py'
Jan 05 20:50:58 compute-0 sudo[144332]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:58 compute-0 python3.9[144334]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtproxyd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:58 compute-0 sudo[144332]: pam_unix(sudo:session): session closed for user root
Jan 05 20:50:59 compute-0 sudo[144487]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jallnqolamlcaujffpuafsserwkvasud ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646258.8318706-409-104993092479961/AnsiballZ_systemd.py'
Jan 05 20:50:59 compute-0 sudo[144487]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:50:59 compute-0 python3.9[144489]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:50:59 compute-0 sudo[144487]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:00 compute-0 sudo[144642]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcosmdpwfccxfdkqrcbpgcqrfmvyduhk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646259.8201945-409-171137301143526/AnsiballZ_systemd.py'
Jan 05 20:51:00 compute-0 sudo[144642]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:00 compute-0 python3.9[144644]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:51:00 compute-0 sudo[144642]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:01 compute-0 sudo[144797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtsrgvomjrqvjntmpxmwhlbbtowecfoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646260.8112435-409-38747336996835/AnsiballZ_systemd.py'
Jan 05 20:51:01 compute-0 sudo[144797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:01 compute-0 python3.9[144799]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtqemud-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:51:01 compute-0 sudo[144797]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:02 compute-0 sudo[144952]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iewauwqsdovzpdltkpvtblfesnpldltb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646261.9690957-409-132457854682481/AnsiballZ_systemd.py'
Jan 05 20:51:02 compute-0 sudo[144952]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:02 compute-0 python3.9[144954]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:51:02 compute-0 sudo[144952]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:03 compute-0 sudo[145107]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abwhnrfzajhanrfhzuuigtbnoyqdasph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646263.0250204-409-101774496965544/AnsiballZ_systemd.py'
Jan 05 20:51:03 compute-0 sudo[145107]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:03 compute-0 python3.9[145109]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-ro.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:51:04 compute-0 sudo[145107]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:05 compute-0 sudo[145262]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-loiazlqvqhfxhqpknmttvdbmvhkdsczy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646265.1129405-409-202184309560609/AnsiballZ_systemd.py'
Jan 05 20:51:05 compute-0 sudo[145262]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:05 compute-0 python3.9[145264]: ansible-ansible.builtin.systemd Invoked with enabled=True masked=False name=virtsecretd-admin.socket daemon_reload=False daemon_reexec=False scope=system no_block=False state=None force=None
Jan 05 20:51:06 compute-0 sudo[145262]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:06 compute-0 sudo[145417]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-srbzuzmdqwrptkhiyoseazqcqicbdhtk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646266.6106384-511-42674380960316/AnsiballZ_file.py'
Jan 05 20:51:06 compute-0 sudo[145417]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:07 compute-0 python3.9[145419]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/etc/tmpfiles.d/ setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:07 compute-0 sudo[145417]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:07 compute-0 sudo[145569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rwkxdrpdqkntpijrcwnyajvqgppxlelp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646267.3958201-511-65845266063684/AnsiballZ_file.py'
Jan 05 20:51:07 compute-0 sudo[145569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:07 compute-0 python3.9[145571]: ansible-ansible.builtin.file Invoked with group=root owner=root path=/var/lib/edpm-config/firewall setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:07 compute-0 sudo[145569]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:08 compute-0 sudo[145731]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvqfoznjbsnaiocbjwocunhkxkijmmbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646268.1512322-511-271254525062666/AnsiballZ_file.py'
Jan 05 20:51:08 compute-0 sudo[145731]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:08 compute-0 podman[145695]: 2026-01-05 20:51:08.686476854 +0000 UTC m=+0.170619702 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 20:51:08 compute-0 python3.9[145742]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:08 compute-0 sudo[145731]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:09 compute-0 sudo[145901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-befohxwdjhstxuityfyffiojueifuqlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646269.0630858-511-64544204275375/AnsiballZ_file.py'
Jan 05 20:51:09 compute-0 sudo[145901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:09 compute-0 python3.9[145903]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/libvirt/private setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:09 compute-0 sudo[145901]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:10 compute-0 sudo[146053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iggdjokmosbhsytlhkpfnrndeqnshvby ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646269.893243-511-194759396098918/AnsiballZ_file.py'
Jan 05 20:51:10 compute-0 sudo[146053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:10 compute-0 python3.9[146055]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/pki/CA setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:10 compute-0 sudo[146053]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:11 compute-0 sudo[146205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzixxophunkmdwhppmbgncyecmmqwizo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646270.8702583-511-181258458418053/AnsiballZ_file.py'
Jan 05 20:51:11 compute-0 sudo[146205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:11 compute-0 python3.9[146207]: ansible-ansible.builtin.file Invoked with group=qemu owner=root path=/etc/pki/qemu setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:51:11 compute-0 sudo[146205]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:12 compute-0 sudo[146357]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txihfpmfuomtqhecdwgwlfweerphhdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646271.7786565-554-18702308646147/AnsiballZ_stat.py'
Jan 05 20:51:12 compute-0 sudo[146357]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:12 compute-0 python3.9[146359]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtlogd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:12 compute-0 sudo[146357]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:13 compute-0 sudo[146482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-axmpqjccvecuzlmsqybhbxxydaypkclq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646271.7786565-554-18702308646147/AnsiballZ_copy.py'
Jan 05 20:51:13 compute-0 sudo[146482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:13 compute-0 python3.9[146484]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtlogd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646271.7786565-554-18702308646147/.source.conf follow=False _original_basename=virtlogd.conf checksum=d7a72ae92c2c205983b029473e05a6aa4c58ec24 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:13 compute-0 sudo[146482]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:14 compute-0 sudo[146634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fbzrkogpyfhootmdipbzxcdptsqsxogn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646273.846826-554-129475635045309/AnsiballZ_stat.py'
Jan 05 20:51:14 compute-0 sudo[146634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:14 compute-0 python3.9[146636]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtnodedevd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:14 compute-0 sudo[146634]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:14 compute-0 sudo[146759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vieveyfssomjqisvnznqjyjeiwitxgvw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646273.846826-554-129475635045309/AnsiballZ_copy.py'
Jan 05 20:51:14 compute-0 sudo[146759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:15 compute-0 python3.9[146761]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtnodedevd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646273.846826-554-129475635045309/.source.conf follow=False _original_basename=virtnodedevd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:15 compute-0 sudo[146759]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:15 compute-0 sudo[146911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vvhmbzbapnvvrypshoxmndepusqmfzfm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646275.4058132-554-144533549726495/AnsiballZ_stat.py'
Jan 05 20:51:15 compute-0 sudo[146911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:16 compute-0 python3.9[146913]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtproxyd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:16 compute-0 sudo[146911]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:16 compute-0 sudo[147047]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ykunfnqhjkpdnriduwctftljcmbdfzxf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646275.4058132-554-144533549726495/AnsiballZ_copy.py'
Jan 05 20:51:16 compute-0 sudo[147047]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:16 compute-0 podman[147010]: 2026-01-05 20:51:16.587442099 +0000 UTC m=+0.085317666 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 20:51:16 compute-0 python3.9[147053]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtproxyd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646275.4058132-554-144533549726495/.source.conf follow=False _original_basename=virtproxyd.conf checksum=28bc484b7c9988e03de49d4fcc0a088ea975f716 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:16 compute-0 sudo[147047]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:17 compute-0 sudo[147205]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhyhqhzinmxidfshbfetxnqqjzfhcfyv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646277.0098214-554-154163494703093/AnsiballZ_stat.py'
Jan 05 20:51:17 compute-0 sudo[147205]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:17 compute-0 python3.9[147207]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtqemud.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:17 compute-0 sudo[147205]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:18 compute-0 sudo[147330]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jiughbwczytersvrfnkcaunhoiujmlgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646277.0098214-554-154163494703093/AnsiballZ_copy.py'
Jan 05 20:51:18 compute-0 sudo[147330]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:18 compute-0 python3.9[147332]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtqemud.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646277.0098214-554-154163494703093/.source.conf follow=False _original_basename=virtqemud.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:18 compute-0 sudo[147330]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:19 compute-0 sudo[147482]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-odbtalmovyelkorpesviznjdvmitzygx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646278.566584-554-4090194118839/AnsiballZ_stat.py'
Jan 05 20:51:19 compute-0 sudo[147482]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:19 compute-0 python3.9[147484]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/qemu.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:19 compute-0 sudo[147482]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:19 compute-0 sudo[147607]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-laqoczencitcftgqafkxjqwdpevsfjqx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646278.566584-554-4090194118839/AnsiballZ_copy.py'
Jan 05 20:51:19 compute-0 sudo[147607]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:20 compute-0 python3.9[147609]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/qemu.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646278.566584-554-4090194118839/.source.conf follow=False _original_basename=qemu.conf.j2 checksum=c44de21af13c90603565570f09ff60c6a41ed8df backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:20 compute-0 sudo[147607]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:20 compute-0 sudo[147759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skrzpmaeooxczkkdzxbqeszokxiihobd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646280.2434094-554-237855526626253/AnsiballZ_stat.py'
Jan 05 20:51:20 compute-0 sudo[147759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:20 compute-0 python3.9[147761]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/virtsecretd.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:21 compute-0 sudo[147759]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:21 compute-0 sudo[147884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfcivnxxvbwiqpbcjzeenguvnmzbjkvz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646280.2434094-554-237855526626253/AnsiballZ_copy.py'
Jan 05 20:51:21 compute-0 sudo[147884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:21 compute-0 python3.9[147886]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/virtsecretd.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646280.2434094-554-237855526626253/.source.conf follow=False _original_basename=virtsecretd.conf checksum=7a604468adb2868f1ab6ebd0fd4622286e6373e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:21 compute-0 sudo[147884]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:22 compute-0 sudo[148036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nkvkzouhahlmsyohycxraajpokjedzio ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646281.981763-554-63447439713434/AnsiballZ_stat.py'
Jan 05 20:51:22 compute-0 sudo[148036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:22 compute-0 python3.9[148038]: ansible-ansible.legacy.stat Invoked with path=/etc/libvirt/auth.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:22 compute-0 sudo[148036]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:23 compute-0 sudo[148159]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lzupmwxeepvqbmghlbnmxqfdqmzmrvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646281.981763-554-63447439713434/AnsiballZ_copy.py'
Jan 05 20:51:23 compute-0 sudo[148159]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:23 compute-0 python3.9[148161]: ansible-ansible.legacy.copy Invoked with dest=/etc/libvirt/auth.conf group=libvirt mode=0600 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646281.981763-554-63447439713434/.source.conf follow=False _original_basename=auth.conf checksum=a94cd818c374cec2c8425b70d2e0e2f41b743ae4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:23 compute-0 sudo[148159]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:23 compute-0 sudo[148311]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vapgbxvsmrmkrrxzbifqxuvnzdyisfyb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646283.5048807-554-194652026520179/AnsiballZ_stat.py'
Jan 05 20:51:23 compute-0 sudo[148311]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:24 compute-0 python3.9[148313]: ansible-ansible.legacy.stat Invoked with path=/etc/sasl2/libvirt.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:24 compute-0 sudo[148311]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:24 compute-0 sudo[148436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vknkhyyfuwjcxrnxjfarosvnwpkqgfpa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646283.5048807-554-194652026520179/AnsiballZ_copy.py'
Jan 05 20:51:24 compute-0 sudo[148436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:25 compute-0 python3.9[148438]: ansible-ansible.legacy.copy Invoked with dest=/etc/sasl2/libvirt.conf group=libvirt mode=0640 owner=libvirt src=/home/zuul/.ansible/tmp/ansible-tmp-1767646283.5048807-554-194652026520179/.source.conf follow=False _original_basename=sasl_libvirt.conf checksum=652e4d404bf79253d06956b8e9847c9364979d4a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:25 compute-0 sudo[148436]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:25 compute-0 sudo[148588]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zwlcxhxtslaeflajtbnbbyfzfqtvsubw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646285.3471687-667-141176747864608/AnsiballZ_command.py'
Jan 05 20:51:25 compute-0 sudo[148588]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:25 compute-0 python3.9[148590]: ansible-ansible.legacy.command Invoked with cmd=saslpasswd2 -f /etc/libvirt/passwd.db -p -a libvirt -u openstack migration stdin=12345678 _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None
Jan 05 20:51:26 compute-0 sudo[148588]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:26 compute-0 sudo[148741]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oncohagyenuuolxnomveecvvqrjiencg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646286.3133817-676-248436759380075/AnsiballZ_file.py'
Jan 05 20:51:26 compute-0 sudo[148741]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:26 compute-0 python3.9[148743]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:26 compute-0 sudo[148741]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:27 compute-0 sudo[148893]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcrwfdauieoedbfqscedmyqjduqattrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646287.1270993-676-225237757881467/AnsiballZ_file.py'
Jan 05 20:51:27 compute-0 sudo[148893]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:27 compute-0 python3.9[148895]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtlogd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:27 compute-0 sudo[148893]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:28 compute-0 sudo[149045]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ujzumiqvapbgpyhjnxxpgjuxzjdrsedr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646287.9489727-676-138707123701678/AnsiballZ_file.py'
Jan 05 20:51:28 compute-0 sudo[149045]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:28 compute-0 python3.9[149047]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:28 compute-0 sudo[149045]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:29 compute-0 sudo[149197]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iluivplaywulvlgjehjpzfbncciybqll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646288.8739748-676-51959075718449/AnsiballZ_file.py'
Jan 05 20:51:29 compute-0 sudo[149197]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:29 compute-0 python3.9[149199]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:29 compute-0 sudo[149197]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:30 compute-0 sudo[149349]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-thjxtgcrwddmfgccbnvibhrbdmqyaohe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646289.628999-676-53299626205999/AnsiballZ_file.py'
Jan 05 20:51:30 compute-0 sudo[149349]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:30 compute-0 python3.9[149351]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtnodedevd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:30 compute-0 sudo[149349]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:30 compute-0 sudo[149501]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-akkziecfefveakpwfurmsarlzyevbcae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646290.5128179-676-63999100414836/AnsiballZ_file.py'
Jan 05 20:51:30 compute-0 sudo[149501]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:31 compute-0 python3.9[149503]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:31 compute-0 sudo[149501]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:31 compute-0 sudo[149653]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfqaxjsjzzduspqtqixvbeqtzimetvvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646291.3534377-676-10612349596956/AnsiballZ_file.py'
Jan 05 20:51:31 compute-0 sudo[149653]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:31 compute-0 python3.9[149655]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:31 compute-0 sudo[149653]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:32 compute-0 sudo[149805]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrsnhinzhepnoegpedzuqwfgqaqawrmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646292.1271102-676-57245998434404/AnsiballZ_file.py'
Jan 05 20:51:32 compute-0 sudo[149805]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:32 compute-0 python3.9[149807]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtproxyd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:32 compute-0 sudo[149805]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:33 compute-0 sudo[149957]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qedbxsjdrahkvhcfhtfsrzolcfispryl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646292.9160843-676-7110059128114/AnsiballZ_file.py'
Jan 05 20:51:33 compute-0 sudo[149957]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:33 compute-0 python3.9[149959]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:33 compute-0 sudo[149957]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:34 compute-0 sudo[150109]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqostajdvvfengvucykchqmcrowqrfwv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646293.846863-676-263465132055039/AnsiballZ_file.py'
Jan 05 20:51:34 compute-0 sudo[150109]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:34 compute-0 python3.9[150111]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:34 compute-0 sudo[150109]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:35 compute-0 sudo[150261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oqqzfkkuxgbkhdhyrujzlpycwakfkwxu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646294.6570296-676-133473160933595/AnsiballZ_file.py'
Jan 05 20:51:35 compute-0 sudo[150261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:35 compute-0 python3.9[150263]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtqemud-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:35 compute-0 sudo[150261]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:35 compute-0 sudo[150413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zptvgawyoxjzkkknceemibadiyjrrnvx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646295.4543643-676-158329433405854/AnsiballZ_file.py'
Jan 05 20:51:35 compute-0 sudo[150413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:36 compute-0 python3.9[150415]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:36 compute-0 sudo[150413]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:36 compute-0 sudo[150565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ovuxlrjdujhvlzrrhncgrrsvkkfgaums ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646296.356886-676-278694919833392/AnsiballZ_file.py'
Jan 05 20:51:36 compute-0 sudo[150565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:36 compute-0 python3.9[150567]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-ro.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:36 compute-0 sudo[150565]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:37 compute-0 sudo[150717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnuyhvpkgafkcaxnvjgloncrahtuwqmq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646297.1866095-676-262430196334468/AnsiballZ_file.py'
Jan 05 20:51:37 compute-0 sudo[150717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:37 compute-0 python3.9[150719]: ansible-ansible.builtin.file Invoked with group=root mode=0755 owner=root path=/etc/systemd/system/virtsecretd-admin.socket.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:37 compute-0 sudo[150717]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:38 compute-0 sudo[150869]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onifxhnrdrmuzxdppjkgqqejvqvvdmle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646298.1964924-775-7611681419920/AnsiballZ_stat.py'
Jan 05 20:51:38 compute-0 sudo[150869]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:38 compute-0 python3.9[150871]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:38 compute-0 sudo[150869]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:39 compute-0 sudo[151003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhivsamrsjqgcfiurbvydandejrctfzl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646298.1964924-775-7611681419920/AnsiballZ_copy.py'
Jan 05 20:51:39 compute-0 sudo[151003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:39 compute-0 podman[150966]: 2026-01-05 20:51:39.396735309 +0000 UTC m=+0.129897245 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 05 20:51:39 compute-0 python3.9[151011]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646298.1964924-775-7611681419920/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:39 compute-0 sudo[151003]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:40 compute-0 sudo[151170]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gqlnjjysznylagvaemshlxvliryrtvjb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646299.7919714-775-83114764064020/AnsiballZ_stat.py'
Jan 05 20:51:40 compute-0 sudo[151170]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:40 compute-0 python3.9[151172]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtlogd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:40 compute-0 sudo[151170]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:40 compute-0 sudo[151293]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wgfwcunexmgqebugkjtioqdvkhhlxvcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646299.7919714-775-83114764064020/AnsiballZ_copy.py'
Jan 05 20:51:40 compute-0 sudo[151293]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:41 compute-0 python3.9[151295]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtlogd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646299.7919714-775-83114764064020/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:41 compute-0 sudo[151293]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:41 compute-0 sudo[151445]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cgjprsnguvybohocnaiwvwpxjskuabdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646301.3420908-775-132397392108913/AnsiballZ_stat.py'
Jan 05 20:51:41 compute-0 sudo[151445]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:41 compute-0 python3.9[151447]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:41 compute-0 sudo[151445]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:42 compute-0 sudo[151568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ezobwpiblqldoeddwtztmapkpirznxnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646301.3420908-775-132397392108913/AnsiballZ_copy.py'
Jan 05 20:51:42 compute-0 sudo[151568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:42 compute-0 python3.9[151570]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646301.3420908-775-132397392108913/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:42 compute-0 sudo[151568]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:51:42.817 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:51:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:51:42.818 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:51:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:51:42.818 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:51:43 compute-0 sudo[151720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffrfsdexwykycgwjqnkyjggjjmbvyvok ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646302.9532056-775-269374682585042/AnsiballZ_stat.py'
Jan 05 20:51:43 compute-0 sudo[151720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:43 compute-0 python3.9[151722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:43 compute-0 sudo[151720]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:44 compute-0 sudo[151843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jofteevjgxitopkmdbnjypfrkakluqqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646302.9532056-775-269374682585042/AnsiballZ_copy.py'
Jan 05 20:51:44 compute-0 sudo[151843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:44 compute-0 python3.9[151845]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646302.9532056-775-269374682585042/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:44 compute-0 sudo[151843]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:44 compute-0 sudo[151995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auqdwjvwaadoxjmmzivsrrbircllggas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646304.5185149-775-142037141128309/AnsiballZ_stat.py'
Jan 05 20:51:44 compute-0 sudo[151995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:45 compute-0 python3.9[151997]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:45 compute-0 sudo[151995]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:45 compute-0 sudo[152118]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekmfxusphhjvialfydktjvwmppfamntg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646304.5185149-775-142037141128309/AnsiballZ_copy.py'
Jan 05 20:51:45 compute-0 sudo[152118]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:45 compute-0 python3.9[152120]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtnodedevd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646304.5185149-775-142037141128309/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:45 compute-0 sudo[152118]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:46 compute-0 sudo[152270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gccapaaflbcxgizsliqzlsctcujvvguu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646306.0925837-775-187864104586634/AnsiballZ_stat.py'
Jan 05 20:51:46 compute-0 sudo[152270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:46 compute-0 python3.9[152272]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:46 compute-0 sudo[152270]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:46 compute-0 podman[152273]: 2026-01-05 20:51:46.73275904 +0000 UTC m=+0.082243026 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 20:51:47 compute-0 sudo[152413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkevsefasqqaqkoiaterajlvqjpoiomn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646306.0925837-775-187864104586634/AnsiballZ_copy.py'
Jan 05 20:51:47 compute-0 sudo[152413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:47 compute-0 python3.9[152415]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646306.0925837-775-187864104586634/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:47 compute-0 sudo[152413]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:48 compute-0 sudo[152565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-icqzspiiqpthvkubrttgjdjojlyilkle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646307.7886245-775-8249264246715/AnsiballZ_stat.py'
Jan 05 20:51:48 compute-0 sudo[152565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:48 compute-0 python3.9[152567]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:48 compute-0 sudo[152565]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:48 compute-0 sudo[152688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmacchkzrxbhupxvenwgiztdhzzgkcor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646307.7886245-775-8249264246715/AnsiballZ_copy.py'
Jan 05 20:51:48 compute-0 sudo[152688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:49 compute-0 python3.9[152690]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646307.7886245-775-8249264246715/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:49 compute-0 sudo[152688]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:49 compute-0 sudo[152840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhqflfxlwpjnltrbmqeeqmsawroktwos ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646309.3778205-775-113124683792790/AnsiballZ_stat.py'
Jan 05 20:51:49 compute-0 sudo[152840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:49 compute-0 python3.9[152842]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:49 compute-0 sudo[152840]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:50 compute-0 sudo[152963]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-opomnnrllwzlfezmfovearyzlhkhzlkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646309.3778205-775-113124683792790/AnsiballZ_copy.py'
Jan 05 20:51:50 compute-0 sudo[152963]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:50 compute-0 python3.9[152965]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtproxyd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646309.3778205-775-113124683792790/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:50 compute-0 sudo[152963]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:51 compute-0 sudo[153115]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lswmqsmokdgvwkrnegxhghenensnfqgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646310.8812194-775-155018318044462/AnsiballZ_stat.py'
Jan 05 20:51:51 compute-0 sudo[153115]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:51 compute-0 python3.9[153117]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:51 compute-0 sudo[153115]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:52 compute-0 sudo[153238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kwixymtpcrbazagfyoaaaljvzgjwyiun ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646310.8812194-775-155018318044462/AnsiballZ_copy.py'
Jan 05 20:51:52 compute-0 sudo[153238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:52 compute-0 python3.9[153240]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646310.8812194-775-155018318044462/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:52 compute-0 sudo[153238]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:53 compute-0 sudo[153390]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ufysygcbxvenikzioxqyvdshihqjtjoh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646312.6075656-775-96445777841376/AnsiballZ_stat.py'
Jan 05 20:51:53 compute-0 sudo[153390]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:53 compute-0 python3.9[153392]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:53 compute-0 sudo[153390]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:53 compute-0 sudo[153513]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nvpgriwpustteawpjdxtqizmefzdbxkl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646312.6075656-775-96445777841376/AnsiballZ_copy.py'
Jan 05 20:51:53 compute-0 sudo[153513]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:54 compute-0 python3.9[153515]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646312.6075656-775-96445777841376/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:54 compute-0 sudo[153513]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:54 compute-0 sudo[153665]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bjotsucbqxncmzeotccyksjvccqanrwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646314.3260818-775-143225137071372/AnsiballZ_stat.py'
Jan 05 20:51:54 compute-0 sudo[153665]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:54 compute-0 python3.9[153667]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtqemud-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:54 compute-0 sudo[153665]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:55 compute-0 sudo[153788]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjulvtldwlwgbakxbhbjltdbslczzbti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646314.3260818-775-143225137071372/AnsiballZ_copy.py'
Jan 05 20:51:55 compute-0 sudo[153788]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:55 compute-0 python3.9[153790]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtqemud-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646314.3260818-775-143225137071372/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:55 compute-0 sudo[153788]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:56 compute-0 sudo[153940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sizpbvigjzzeljyrpwsajsouiotuqviw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646316.062186-775-256192722733815/AnsiballZ_stat.py'
Jan 05 20:51:56 compute-0 sudo[153940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:56 compute-0 python3.9[153942]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:56 compute-0 sudo[153940]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:57 compute-0 sudo[154063]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ahazwatzmncxzqddzopdmpvspazzwfia ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646316.062186-775-256192722733815/AnsiballZ_copy.py'
Jan 05 20:51:57 compute-0 sudo[154063]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:57 compute-0 python3.9[154065]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646316.062186-775-256192722733815/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:57 compute-0 sudo[154063]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:58 compute-0 sudo[154215]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mhiscpytzbezzdvjfashfkhegrdvqhbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646317.6761575-775-5122594115378/AnsiballZ_stat.py'
Jan 05 20:51:58 compute-0 sudo[154215]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:58 compute-0 python3.9[154217]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:58 compute-0 sudo[154215]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:58 compute-0 sudo[154338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dmopavpwfabfaadvfgvalffgooffxcrx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646317.6761575-775-5122594115378/AnsiballZ_copy.py'
Jan 05 20:51:58 compute-0 sudo[154338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:59 compute-0 python3.9[154340]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-ro.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646317.6761575-775-5122594115378/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:51:59 compute-0 sudo[154338]: pam_unix(sudo:session): session closed for user root
Jan 05 20:51:59 compute-0 sudo[154490]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mskezfcioiaunhcrmgcbqmmcndrmxpel ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646319.2131352-775-53015927443855/AnsiballZ_stat.py'
Jan 05 20:51:59 compute-0 sudo[154490]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:51:59 compute-0 python3.9[154492]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:51:59 compute-0 sudo[154490]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:00 compute-0 sudo[154613]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mrvlonpjyirvsrgvcfdsqmdzouuhmeiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646319.2131352-775-53015927443855/AnsiballZ_copy.py'
Jan 05 20:52:00 compute-0 sudo[154613]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:00 compute-0 python3.9[154615]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virtsecretd-admin.socket.d/override.conf group=root mode=0644 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646319.2131352-775-53015927443855/.source.conf follow=False _original_basename=libvirt-socket.unit.j2 checksum=0bad41f409b4ee7e780a2a59dc18f5c84ed99826 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:00 compute-0 sudo[154613]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:01 compute-0 python3.9[154765]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail
                                             ls -lRZ /run/libvirt | grep -E ':container_\S+_t'
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:52:02 compute-0 sudo[154918]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ehuoprlgypouxvqjwxaiphiumnxwuhgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646321.7664607-981-225822025345387/AnsiballZ_seboolean.py'
Jan 05 20:52:02 compute-0 sudo[154918]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:02 compute-0 python3.9[154920]: ansible-ansible.posix.seboolean Invoked with name=os_enable_vtpm persistent=True state=True ignore_selinux_state=False
Jan 05 20:52:03 compute-0 sudo[154918]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:04 compute-0 sudo[155074]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbyouixrxqwieolgiajmlkvfrkzkosj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646324.0793407-989-280708764914781/AnsiballZ_copy.py'
Jan 05 20:52:04 compute-0 dbus-broker-launch[776]: avc:  op=load_policy lsm=selinux seqno=13 res=1
Jan 05 20:52:04 compute-0 sudo[155074]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:04 compute-0 python3.9[155076]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/servercert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:04 compute-0 sudo[155074]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:05 compute-0 sudo[155226]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sjikreacufopxpmrewbywieyxyqskvgd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646324.894434-989-159248861241457/AnsiballZ_copy.py'
Jan 05 20:52:05 compute-0 sudo[155226]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:05 compute-0 python3.9[155228]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/serverkey.pem group=root mode=0600 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:05 compute-0 sudo[155226]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:06 compute-0 sudo[155378]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eemaqhnfscetcowjyjswdexiqszfkhpj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646325.7298002-989-176745876560060/AnsiballZ_copy.py'
Jan 05 20:52:06 compute-0 sudo[155378]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:06 compute-0 python3.9[155380]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/clientcert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:06 compute-0 sudo[155378]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:06 compute-0 sudo[155530]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mgcjjmfmrwjfiuhateltvngahyytoskz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646326.5836995-989-78404531723102/AnsiballZ_copy.py'
Jan 05 20:52:06 compute-0 sudo[155530]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:07 compute-0 python3.9[155532]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/libvirt/private/clientkey.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:07 compute-0 sudo[155530]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:07 compute-0 sudo[155682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-imbuigrpzuihefjobojvchujxsgtgzrv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646327.3563461-989-211142766823775/AnsiballZ_copy.py'
Jan 05 20:52:07 compute-0 sudo[155682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:07 compute-0 python3.9[155684]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/CA/cacert.pem group=root mode=0644 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:08 compute-0 sudo[155682]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:08 compute-0 sudo[155834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsbtzjhotbkhfmwbybtnoqfuokdmpirk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646328.2681484-1025-219474456960164/AnsiballZ_copy.py'
Jan 05 20:52:08 compute-0 sudo[155834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:09 compute-0 python3.9[155836]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:09 compute-0 sudo[155834]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:09 compute-0 sudo[156003]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jgcoixjqgvkzulchrjisfykyptrpioov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646329.3062353-1025-139373557186528/AnsiballZ_copy.py'
Jan 05 20:52:09 compute-0 sudo[156003]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:09 compute-0 podman[155960]: 2026-01-05 20:52:09.827801487 +0000 UTC m=+0.166514864 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 20:52:09 compute-0 python3.9[156009]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/server-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:10 compute-0 sudo[156003]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:10 compute-0 sudo[156164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gctxtjwffxchdaxchmrqgqkwwzkqvfjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646330.230719-1025-77512900155525/AnsiballZ_copy.py'
Jan 05 20:52:10 compute-0 sudo[156164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:10 compute-0 python3.9[156166]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:10 compute-0 sudo[156164]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:11 compute-0 sudo[156316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgwljxyirztshegidojozxfpkmxkxtzm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646331.1189976-1025-164387361523284/AnsiballZ_copy.py'
Jan 05 20:52:11 compute-0 sudo[156316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:11 compute-0 python3.9[156318]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/client-key.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/tls.key backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:11 compute-0 sudo[156316]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:12 compute-0 sudo[156468]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qaqeyekoahcrddvvezufacfrfdooksfu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646331.9326735-1025-168050060279913/AnsiballZ_copy.py'
Jan 05 20:52:12 compute-0 sudo[156468]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:12 compute-0 python3.9[156470]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/qemu/ca-cert.pem group=qemu mode=0640 owner=root remote_src=True src=/var/lib/openstack/certs/libvirt/default/ca.crt backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:12 compute-0 sudo[156468]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:13 compute-0 sudo[156620]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gbmshcboupsfrhzfnqfyknjupbydmnrr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646332.822225-1061-175488915547193/AnsiballZ_systemd.py'
Jan 05 20:52:13 compute-0 sudo[156620]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:13 compute-0 python3.9[156622]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtlogd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:52:13 compute-0 systemd[1]: Reloading.
Jan 05 20:52:13 compute-0 systemd-rc-local-generator[156648]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:13 compute-0 systemd-sysv-generator[156651]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:14 compute-0 systemd[1]: Starting libvirt logging daemon socket...
Jan 05 20:52:14 compute-0 systemd[1]: Listening on libvirt logging daemon socket.
Jan 05 20:52:14 compute-0 systemd[1]: Starting libvirt logging daemon admin socket...
Jan 05 20:52:14 compute-0 systemd[1]: Listening on libvirt logging daemon admin socket.
Jan 05 20:52:14 compute-0 systemd[1]: Starting libvirt logging daemon...
Jan 05 20:52:14 compute-0 systemd[1]: Started libvirt logging daemon.
Jan 05 20:52:14 compute-0 sudo[156620]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:14 compute-0 sudo[156813]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wizbutdfkixbgxmhnynxceqcoxrweyiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646334.407993-1061-80749708757247/AnsiballZ_systemd.py'
Jan 05 20:52:14 compute-0 sudo[156813]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:15 compute-0 python3.9[156815]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtnodedevd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:52:15 compute-0 systemd[1]: Reloading.
Jan 05 20:52:15 compute-0 systemd-rc-local-generator[156838]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:15 compute-0 systemd-sysv-generator[156843]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:15 compute-0 systemd[1]: Starting libvirt nodedev daemon socket...
Jan 05 20:52:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon socket.
Jan 05 20:52:15 compute-0 systemd[1]: Starting libvirt nodedev daemon admin socket...
Jan 05 20:52:15 compute-0 systemd[1]: Starting libvirt nodedev daemon read-only socket...
Jan 05 20:52:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon admin socket.
Jan 05 20:52:15 compute-0 systemd[1]: Listening on libvirt nodedev daemon read-only socket.
Jan 05 20:52:15 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 05 20:52:15 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 05 20:52:15 compute-0 sudo[156813]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:16 compute-0 systemd[1]: Starting SETroubleshoot daemon for processing new SELinux denial logs...
Jan 05 20:52:16 compute-0 sudo[157030]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ebdziowvctohphydwfvhdzkovnlvhmum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646335.8236973-1061-10667198323498/AnsiballZ_systemd.py'
Jan 05 20:52:16 compute-0 sudo[157030]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:16 compute-0 systemd[1]: Started SETroubleshoot daemon for processing new SELinux denial logs.
Jan 05 20:52:16 compute-0 systemd[1]: Created slice Slice /system/dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged.
Jan 05 20:52:16 compute-0 python3.9[157032]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtproxyd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:52:16 compute-0 systemd[1]: Started dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service.
Jan 05 20:52:16 compute-0 systemd[1]: Reloading.
Jan 05 20:52:16 compute-0 systemd-rc-local-generator[157065]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:16 compute-0 systemd-sysv-generator[157070]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:16 compute-0 systemd[1]: Starting libvirt proxy daemon admin socket...
Jan 05 20:52:17 compute-0 systemd[1]: Starting libvirt proxy daemon read-only socket...
Jan 05 20:52:17 compute-0 systemd[1]: Listening on libvirt proxy daemon read-only socket.
Jan 05 20:52:17 compute-0 systemd[1]: Listening on libvirt proxy daemon admin socket.
Jan 05 20:52:17 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 20:52:17 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 20:52:17 compute-0 podman[157077]: 2026-01-05 20:52:17.083003999 +0000 UTC m=+0.104860702 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 20:52:17 compute-0 sudo[157030]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:17 compute-0 setroubleshoot[156979]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability. For complete SELinux messages run: sealert -l 10d56bba-b289-49d4-9569-45adfe83f4fb
Jan 05 20:52:17 compute-0 setroubleshoot[156979]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
                                                  
Jan 05 20:52:17 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:52:17 compute-0 setroubleshoot[156979]: SELinux is preventing /usr/sbin/virtlogd from using the dac_read_search capability.
                                                  
                                                  *****  Plugin dac_override (91.4 confidence) suggests   **********************
                                                  
                                                  If you want to help identify if domain needs this access or you have a file with the wrong permissions on your system
                                                  Then turn on full auditing to get path information about the offending file and generate the error again.
                                                  Do
                                                  
                                                  Turn on full auditing
                                                  # auditctl -w /etc/shadow -p w
                                                  Try to recreate AVC. Then execute
                                                  # ausearch -m avc -ts recent
                                                  If you see PATH record check ownership/permissions on file, and fix it,
                                                  otherwise report as a bugzilla.
                                                  
                                                  *****  Plugin catchall (9.59 confidence) suggests   **************************
                                                  
                                                  If you believe that virtlogd should have the dac_read_search capability by default.
                                                  Then you should report this as a bug.
                                                  You can generate a local policy module to allow this access.
                                                  Do
                                                  allow this access for now by executing:
                                                  # ausearch -c 'virtlogd' --raw | audit2allow -M my-virtlogd
                                                  # semodule -X 300 -i my-virtlogd.pp
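For orientation, the `ausearch | audit2allow` pipeline suggested above generates a local SELinux policy module from the denied AVC records. A module produced for this denial would look roughly like the following sketch (the type name `virtlogd_t` and module version are assumptions; the real output depends on the AVC records present on the host):

```
module my-virtlogd 1.0;

require {
        type virtlogd_t;
        class capability dac_read_search;
}

#============= virtlogd_t ==============
allow virtlogd_t self:capability dac_read_search;
```

Installing it with `semodule -X 300 -i my-virtlogd.pp` (as the plugin suggests) loads it at priority 300, so it overrides the distribution policy without modifying it.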
                                                  
Jan 05 20:52:17 compute-0 sudo[157270]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrizrbxsfwwgnhcwwvllbeiqcxtngtfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646337.3135083-1061-105947856727815/AnsiballZ_systemd.py'
Jan 05 20:52:17 compute-0 sudo[157270]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:18 compute-0 python3.9[157272]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtqemud.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:52:18 compute-0 systemd[1]: Reloading.
Jan 05 20:52:18 compute-0 systemd-rc-local-generator[157293]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:18 compute-0 systemd-sysv-generator[157302]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:18 compute-0 systemd[1]: Listening on libvirt locking daemon socket.
Jan 05 20:52:18 compute-0 systemd[1]: Starting libvirt QEMU daemon socket...
Jan 05 20:52:18 compute-0 systemd[1]: Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 05 20:52:18 compute-0 systemd[1]: Starting Virtual Machine and Container Registration Service...
Jan 05 20:52:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon socket.
Jan 05 20:52:18 compute-0 systemd[1]: Starting libvirt QEMU daemon admin socket...
Jan 05 20:52:18 compute-0 systemd[1]: Starting libvirt QEMU daemon read-only socket...
Jan 05 20:52:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon admin socket.
Jan 05 20:52:18 compute-0 systemd[1]: Listening on libvirt QEMU daemon read-only socket.
Jan 05 20:52:18 compute-0 systemd[1]: Started Virtual Machine and Container Registration Service.
Jan 05 20:52:18 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 05 20:52:18 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 05 20:52:18 compute-0 sudo[157270]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:19 compute-0 sudo[157485]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dgdzkwgbsegtvfmbzkquwwessnshqxrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646338.7618911-1061-19932319027651/AnsiballZ_systemd.py'
Jan 05 20:52:19 compute-0 sudo[157485]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:19 compute-0 python3.9[157487]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True name=virtsecretd.service state=restarted daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:52:19 compute-0 systemd[1]: Reloading.
Jan 05 20:52:19 compute-0 systemd-rc-local-generator[157515]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:19 compute-0 systemd-sysv-generator[157520]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:19 compute-0 systemd[1]: Starting libvirt secret daemon socket...
Jan 05 20:52:19 compute-0 systemd[1]: Listening on libvirt secret daemon socket.
Jan 05 20:52:19 compute-0 systemd[1]: Starting libvirt secret daemon admin socket...
Jan 05 20:52:19 compute-0 systemd[1]: Starting libvirt secret daemon read-only socket...
Jan 05 20:52:19 compute-0 systemd[1]: Listening on libvirt secret daemon read-only socket.
Jan 05 20:52:19 compute-0 systemd[1]: Listening on libvirt secret daemon admin socket.
Jan 05 20:52:19 compute-0 systemd[1]: Starting libvirt secret daemon...
Jan 05 20:52:19 compute-0 systemd[1]: Started libvirt secret daemon.
Jan 05 20:52:19 compute-0 sudo[157485]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:20 compute-0 sudo[157698]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vntexqficmfsnydcgispbshnfilvzgif ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646340.3657324-1098-79274146457639/AnsiballZ_file.py'
Jan 05 20:52:20 compute-0 sudo[157698]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:21 compute-0 python3.9[157700]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/openstack/config/ceph state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:21 compute-0 sudo[157698]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:21 compute-0 sudo[157850]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtbzxcbxpupwatylxrmpttthvuqymgoj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646341.2939503-1106-130348977001016/AnsiballZ_find.py'
Jan 05 20:52:21 compute-0 sudo[157850]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:21 compute-0 python3.9[157852]: ansible-ansible.builtin.find Invoked with paths=['/var/lib/openstack/config/ceph'] patterns=['*.conf'] read_whole_file=False file_type=file age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:52:21 compute-0 sudo[157850]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:22 compute-0 sudo[158002]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdohzkvbrjaihwvnefdztmfhnhpulwpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646342.4271626-1120-254251291522694/AnsiballZ_stat.py'
Jan 05 20:52:22 compute-0 sudo[158002]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:23 compute-0 python3.9[158004]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/libvirt.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:23 compute-0 sudo[158002]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:23 compute-0 sudo[158125]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eslhpsdukwzwpwbjlsprrgamylqlrzlw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646342.4271626-1120-254251291522694/AnsiballZ_copy.py'
Jan 05 20:52:23 compute-0 sudo[158125]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:23 compute-0 python3.9[158127]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/libvirt.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646342.4271626-1120-254251291522694/.source.yaml follow=False _original_basename=firewall.yaml.j2 checksum=5ca83b1310a74c5e48c4c3d4640e1cb8fdac1061 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:23 compute-0 sudo[158125]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:24 compute-0 sudo[158277]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pwjusokbacexdfvxykckhojqbjmduaxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646344.2286646-1136-192549346516002/AnsiballZ_file.py'
Jan 05 20:52:24 compute-0 sudo[158277]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:24 compute-0 python3.9[158279]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:24 compute-0 sudo[158277]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:25 compute-0 sudo[158429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cpeorrtpgdcbghcicluuimtwactyvfpc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646345.1213465-1144-186656780752860/AnsiballZ_stat.py'
Jan 05 20:52:25 compute-0 sudo[158429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:25 compute-0 python3.9[158431]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:25 compute-0 sudo[158429]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:26 compute-0 sudo[158507]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uarctrgddrfkrbgbidtjlmsjzxpsfpaz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646345.1213465-1144-186656780752860/AnsiballZ_file.py'
Jan 05 20:52:26 compute-0 sudo[158507]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:26 compute-0 python3.9[158509]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:26 compute-0 sudo[158507]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:26 compute-0 sudo[158659]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwqqmzvmhxlmnplbwvbmuyssxqjobxch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646346.643063-1156-243540457302314/AnsiballZ_stat.py'
Jan 05 20:52:26 compute-0 sudo[158659]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:27 compute-0 python3.9[158661]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:27 compute-0 sudo[158659]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:27 compute-0 sudo[158737]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tachbrmlmqqdrribwfcxwwqnpcuvnpvy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646346.643063-1156-243540457302314/AnsiballZ_file.py'
Jan 05 20:52:27 compute-0 sudo[158737]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:27 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Deactivated successfully.
Jan 05 20:52:27 compute-0 systemd[1]: dbus-:1.1-org.fedoraproject.SetroubleshootPrivileged@0.service: Consumed 1.080s CPU time.
Jan 05 20:52:27 compute-0 systemd[1]: setroubleshootd.service: Deactivated successfully.
Jan 05 20:52:27 compute-0 python3.9[158739]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.xzhqk_ni recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:27 compute-0 sudo[158737]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:28 compute-0 sudo[158889]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrxjvmwfgttwhrbzufuvpscoxjnurplz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646348.0898151-1168-190354450412502/AnsiballZ_stat.py'
Jan 05 20:52:28 compute-0 sudo[158889]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:28 compute-0 python3.9[158891]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:28 compute-0 sudo[158889]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:29 compute-0 sudo[158967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fjagxhmzblluydipuoibeblepkswzgty ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646348.0898151-1168-190354450412502/AnsiballZ_file.py'
Jan 05 20:52:29 compute-0 sudo[158967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:29 compute-0 python3.9[158969]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:29 compute-0 sudo[158967]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:29 compute-0 sudo[159119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dslhczbucaaxxavdpapisidypolhfxrb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646349.563332-1181-14218389838698/AnsiballZ_command.py'
Jan 05 20:52:29 compute-0 sudo[159119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:30 compute-0 python3.9[159121]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:52:30 compute-0 sudo[159119]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:31 compute-0 sudo[159272]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixjptjhmdeybtxgywbbxnqefamebrzmb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646350.4474921-1189-159006486107859/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 20:52:31 compute-0 sudo[159272]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:31 compute-0 python3[159274]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 20:52:31 compute-0 sudo[159272]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:31 compute-0 sudo[159424]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xikhmfrtjltzjvrgyspsyusggshqcqdr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646351.562834-1197-230958399171137/AnsiballZ_stat.py'
Jan 05 20:52:31 compute-0 sudo[159424]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:32 compute-0 python3.9[159426]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:32 compute-0 sudo[159424]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:32 compute-0 sudo[159502]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hobnwvrbriuzholeziqghsxahjvgkonv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646351.562834-1197-230958399171137/AnsiballZ_file.py'
Jan 05 20:52:32 compute-0 sudo[159502]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:32 compute-0 python3.9[159504]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:32 compute-0 sudo[159502]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:33 compute-0 sudo[159654]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uarkgvnqaxqodoadazfzbxddexuwhmqv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646353.0345519-1209-249311185659875/AnsiballZ_stat.py'
Jan 05 20:52:33 compute-0 sudo[159654]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:33 compute-0 python3.9[159656]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:33 compute-0 sudo[159654]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:34 compute-0 sudo[159732]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ljzntqwuioufwrzzmhwvcxalsfjmlquz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646353.0345519-1209-249311185659875/AnsiballZ_file.py'
Jan 05 20:52:34 compute-0 sudo[159732]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:34 compute-0 python3.9[159734]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:34 compute-0 sudo[159732]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:34 compute-0 sudo[159884]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tmimaqxeiomcnkbqjwjxfakfiycqjqmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646354.5441027-1221-28992751310112/AnsiballZ_stat.py'
Jan 05 20:52:34 compute-0 sudo[159884]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:35 compute-0 python3.9[159886]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:35 compute-0 sudo[159884]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:35 compute-0 sudo[159962]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zycjqgvptkedlhsmfgovpvlzdjypfvoo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646354.5441027-1221-28992751310112/AnsiballZ_file.py'
Jan 05 20:52:35 compute-0 sudo[159962]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:35 compute-0 python3.9[159964]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:35 compute-0 sudo[159962]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:36 compute-0 sudo[160114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qohgggdhwzgrpqpvyofhjqezxyzuvdph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646355.9559252-1233-64747166429842/AnsiballZ_stat.py'
Jan 05 20:52:36 compute-0 sudo[160114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:36 compute-0 python3.9[160116]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:36 compute-0 sudo[160114]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:36 compute-0 sudo[160192]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjdcwwfpflybzcmzefojidwqxfvechec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646355.9559252-1233-64747166429842/AnsiballZ_file.py'
Jan 05 20:52:36 compute-0 sudo[160192]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:37 compute-0 python3.9[160194]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:37 compute-0 sudo[160192]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:37 compute-0 sudo[160344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zcfytfsqfbaujcxufawuxodqztcnqezm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646357.3397465-1245-49066977185398/AnsiballZ_stat.py'
Jan 05 20:52:37 compute-0 sudo[160344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:38 compute-0 python3.9[160346]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:38 compute-0 sudo[160344]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:38 compute-0 sudo[160469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvcfemttanlhdavywjcfualxqqmbofdq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646357.3397465-1245-49066977185398/AnsiballZ_copy.py'
Jan 05 20:52:38 compute-0 sudo[160469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:38 compute-0 python3.9[160471]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646357.3397465-1245-49066977185398/.source.nft follow=False _original_basename=ruleset.j2 checksum=8a12d4eb5149b6e500230381c1359a710881e9b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:38 compute-0 sudo[160469]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:39 compute-0 sudo[160621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hjidxkmbwihheewekscatygogskwwpkb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646359.2388961-1260-239509361887025/AnsiballZ_file.py'
Jan 05 20:52:39 compute-0 sudo[160621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:39 compute-0 python3.9[160623]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:39 compute-0 sudo[160621]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:40 compute-0 sudo[160783]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ppxhogfkscmlbxinnhlwlyljotklzuvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646360.123116-1268-254074841545564/AnsiballZ_command.py'
Jan 05 20:52:40 compute-0 sudo[160783]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:40 compute-0 podman[160747]: 2026-01-05 20:52:40.610776324 +0000 UTC m=+0.144726141 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 20:52:40 compute-0 python3.9[160792]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:52:40 compute-0 sudo[160783]: pam_unix(sudo:session): session closed for user root
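The validation step logged above concatenates the five fragment files in a fixed order (chains first, then flushes, rules, update-jumps, jumps) and dry-runs the result with `nft -c -f -`. A minimal sketch of that assembly, using stand-in fragment contents since the real files are not shown in the log (`nft` itself is omitted here, as the check requires the nftables tooling):

```shell
set -eu
tmpdir=$(mktemp -d)
# Stand-in fragments; on the host these live under /etc/nftables/.
for f in edpm-chains edpm-flushes edpm-rules edpm-update-jumps edpm-jumps; do
    printf '# %s\n' "$f" > "$tmpdir/$f.nft"
done
# Concatenate in the same order the logged command uses before `nft -c -f -`:
# chains must come first so later fragments can reference them.
assembled=$(cat "$tmpdir/edpm-chains.nft" "$tmpdir/edpm-flushes.nft" \
    "$tmpdir/edpm-rules.nft" "$tmpdir/edpm-update-jumps.nft" "$tmpdir/edpm-jumps.nft")
printf '%s\n' "$assembled"
rm -r "$tmpdir"
```

The ordering matters: chain definitions must be loaded before the flush and rule fragments that refer to them, which is why `nft -f /etc/nftables/edpm-chains.nft` is also applied on its own first (see the command at 20:52:42).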
Jan 05 20:52:41 compute-0 sudo[160954]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjizdkbuqouxjirttgwciqvhbaexaihv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646360.9580863-1276-99963336724608/AnsiballZ_blockinfile.py'
Jan 05 20:52:41 compute-0 sudo[160954]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:41 compute-0 python3.9[160956]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:41 compute-0 sudo[160954]: pam_unix(sudo:session): session closed for user root
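The blockinfile task above maintains a marker-delimited block in /etc/sysconfig/nftables.conf, validated with `nft -c -f %s` before being written. Given the logged `block` and `marker` parameters, the resulting managed section would read approximately as follows (a reconstruction from the task parameters, not a capture of the actual file):

```
# BEGIN ANSIBLE MANAGED BLOCK
include "/etc/nftables/iptables.nft"
include "/etc/nftables/edpm-chains.nft"
include "/etc/nftables/edpm-rules.nft"
include "/etc/nftables/edpm-jumps.nft"
# END ANSIBLE MANAGED BLOCK
```

Because nftables.conf is loaded by nftables.service at boot, this block makes the EDPM ruleset persistent across reboots; note the flush and update-jump fragments are deliberately excluded, as they are only needed for live reloads.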
Jan 05 20:52:42 compute-0 sudo[161106]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-alplgeuiiruthvftyxzoclaezborydwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646361.9935286-1285-221727006665303/AnsiballZ_command.py'
Jan 05 20:52:42 compute-0 sudo[161106]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:42 compute-0 python3.9[161108]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:52:42 compute-0 sudo[161106]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:52:42.819 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:52:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:52:42.820 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:52:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:52:42.820 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:52:43 compute-0 sudo[161259]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wphmsajcttggiupvgfjjaveuzszsokqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646362.8185349-1293-14564511480264/AnsiballZ_stat.py'
Jan 05 20:52:43 compute-0 sudo[161259]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:43 compute-0 python3.9[161261]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:52:43 compute-0 sudo[161259]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:44 compute-0 sudo[161413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lbbdobazpxrmfzdsvrkxqdgiejkiukkt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646363.717358-1301-183815650994317/AnsiballZ_command.py'
Jan 05 20:52:44 compute-0 sudo[161413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:44 compute-0 python3.9[161415]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:52:44 compute-0 sudo[161413]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:44 compute-0 sudo[161568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbiowjgepvsganicaybvknvbatgnqxhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646364.5610433-1309-67408544971091/AnsiballZ_file.py'
Jan 05 20:52:44 compute-0 sudo[161568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:45 compute-0 python3.9[161570]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:45 compute-0 sudo[161568]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:45 compute-0 sudo[161720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nrnzqjkozvonnsnluntyckkdzmzwrjuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646365.5021968-1317-100693647229562/AnsiballZ_stat.py'
Jan 05 20:52:45 compute-0 sudo[161720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:46 compute-0 python3.9[161722]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:46 compute-0 sudo[161720]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:46 compute-0 sudo[161843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hbtduszcvyhprgvjyywffhimzlrtxzxd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646365.5021968-1317-100693647229562/AnsiballZ_copy.py'
Jan 05 20:52:46 compute-0 sudo[161843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:46 compute-0 python3.9[161845]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646365.5021968-1317-100693647229562/.source.target follow=False _original_basename=edpm_libvirt.target checksum=13035a1aa0f414c677b14be9a5a363b6623d393c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:46 compute-0 sudo[161843]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:47 compute-0 sudo[162008]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mbegvtysctadovlxbueqmgvuedwkcksd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646367.2013683-1332-135320349298571/AnsiballZ_stat.py'
Jan 05 20:52:47 compute-0 sudo[162008]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:47 compute-0 podman[161969]: 2026-01-05 20:52:47.72351447 +0000 UTC m=+0.102066453 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3)
Jan 05 20:52:47 compute-0 python3.9[162015]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/edpm_libvirt_guests.service follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:47 compute-0 sudo[162008]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:48 compute-0 sudo[162136]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fsfcumsnectjnfglducxiuqnilhkjajr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646367.2013683-1332-135320349298571/AnsiballZ_copy.py'
Jan 05 20:52:48 compute-0 sudo[162136]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:48 compute-0 python3.9[162138]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/edpm_libvirt_guests.service mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646367.2013683-1332-135320349298571/.source.service follow=False _original_basename=edpm_libvirt_guests.service checksum=db83430a42fc2ccfd6ed8b56ebf04f3dff9cd0cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:48 compute-0 sudo[162136]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:49 compute-0 sudo[162288]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogvxbovghigxkikubncriwvdwzfoonxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646369.0425115-1347-184910992205555/AnsiballZ_stat.py'
Jan 05 20:52:49 compute-0 sudo[162288]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:49 compute-0 python3.9[162290]: ansible-ansible.legacy.stat Invoked with path=/etc/systemd/system/virt-guest-shutdown.target follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:52:49 compute-0 sudo[162288]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:50 compute-0 sudo[162411]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pgfelwkvkbsctuiawpiovftgeneerqnu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646369.0425115-1347-184910992205555/AnsiballZ_copy.py'
Jan 05 20:52:50 compute-0 sudo[162411]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:50 compute-0 python3.9[162413]: ansible-ansible.legacy.copy Invoked with dest=/etc/systemd/system/virt-guest-shutdown.target mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646369.0425115-1347-184910992205555/.source.target follow=False _original_basename=virt-guest-shutdown.target checksum=49ca149619c596cbba877418629d2cf8f7b0f5cf backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:52:50 compute-0 sudo[162411]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:51 compute-0 sudo[162563]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uwjdjfgrotifacrnygpxymzxgnzmogfr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646370.710582-1362-276709890804101/AnsiballZ_systemd.py'
Jan 05 20:52:51 compute-0 sudo[162563]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:51 compute-0 python3.9[162565]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt.target state=restarted daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:52:51 compute-0 systemd[1]: Reloading.
Jan 05 20:52:51 compute-0 systemd-rc-local-generator[162593]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:51 compute-0 systemd-sysv-generator[162596]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:51 compute-0 systemd[1]: Reached target edpm_libvirt.target.
Jan 05 20:52:51 compute-0 sudo[162563]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:52 compute-0 sudo[162754]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yccqbznikosvqyqqgghzyyxqmltaxjvh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646372.0434065-1370-194806445217636/AnsiballZ_systemd.py'
Jan 05 20:52:52 compute-0 sudo[162754]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:52:52 compute-0 python3.9[162756]: ansible-ansible.builtin.systemd Invoked with daemon_reload=True enabled=True name=edpm_libvirt_guests daemon_reexec=False scope=system no_block=False state=None force=None masked=None
Jan 05 20:52:52 compute-0 systemd[1]: Reloading.
Jan 05 20:52:52 compute-0 systemd-rc-local-generator[162783]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:52 compute-0 systemd-sysv-generator[162787]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:53 compute-0 systemd[1]: Reloading.
Jan 05 20:52:53 compute-0 systemd-rc-local-generator[162822]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:52:53 compute-0 systemd-sysv-generator[162826]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:52:53 compute-0 sudo[162754]: pam_unix(sudo:session): session closed for user root
Jan 05 20:52:54 compute-0 sshd-session[108239]: Connection closed by 192.168.122.30 port 40654
Jan 05 20:52:54 compute-0 sshd-session[108236]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:52:54 compute-0 systemd[1]: session-22.scope: Deactivated successfully.
Jan 05 20:52:54 compute-0 systemd[1]: session-22.scope: Consumed 4min 12.013s CPU time.
Jan 05 20:52:54 compute-0 systemd-logind[788]: Session 22 logged out. Waiting for processes to exit.
Jan 05 20:52:54 compute-0 systemd-logind[788]: Removed session 22.
Jan 05 20:52:59 compute-0 sshd-session[162853]: Accepted publickey for zuul from 192.168.122.30 port 38500 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:52:59 compute-0 systemd-logind[788]: New session 23 of user zuul.
Jan 05 20:52:59 compute-0 systemd[1]: Started Session 23 of User zuul.
Jan 05 20:52:59 compute-0 sshd-session[162853]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:53:00 compute-0 python3.9[163006]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:53:02 compute-0 python3.9[163160]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:53:02 compute-0 network[163177]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:53:02 compute-0 network[163178]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:53:02 compute-0 network[163179]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:53:08 compute-0 sudo[163448]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iuriqnpjzomsykcpowqxdxsjectutmov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646387.914836-47-220226521116934/AnsiballZ_setup.py'
Jan 05 20:53:08 compute-0 sudo[163448]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:08 compute-0 python3.9[163450]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 20:53:09 compute-0 sudo[163448]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:09 compute-0 sudo[163532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnvdphprasnxtqcnjojmwxechtufgfav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646387.914836-47-220226521116934/AnsiballZ_dnf.py'
Jan 05 20:53:09 compute-0 sudo[163532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:09 compute-0 python3.9[163534]: ansible-ansible.legacy.dnf Invoked with name=['iscsi-initiator-utils'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:53:11 compute-0 podman[163536]: 2026-01-05 20:53:11.818426087 +0000 UTC m=+0.157246015 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 20:53:15 compute-0 sudo[163532]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:15 compute-0 sudo[163711]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dejrdhmgfndhumxhoudqsprbmigdhbzh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646395.2089336-59-149928753876472/AnsiballZ_stat.py'
Jan 05 20:53:15 compute-0 sudo[163711]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:16 compute-0 python3.9[163713]: ansible-ansible.builtin.stat Invoked with path=/var/lib/config-data/puppet-generated/iscsid/etc/iscsi follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:53:16 compute-0 sudo[163711]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:17 compute-0 sudo[163863]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hmgrnhffzzrxvneiaoydttthozhgvqrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646396.407073-69-276251810624025/AnsiballZ_command.py'
Jan 05 20:53:17 compute-0 sudo[163863]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:17 compute-0 python3.9[163865]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/iscsi /var/lib/iscsi _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:53:17 compute-0 sudo[163863]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:18 compute-0 sudo[164029]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzkzrugrubqkktxcpblrkqhpgobjglgl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646397.6214662-79-54134489633205/AnsiballZ_stat.py'
Jan 05 20:53:18 compute-0 sudo[164029]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:18 compute-0 podman[163990]: 2026-01-05 20:53:18.110538823 +0000 UTC m=+0.121995021 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 20:53:18 compute-0 python3.9[164037]: ansible-ansible.builtin.stat Invoked with path=/etc/iscsi/.initiator_reset follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:53:18 compute-0 sudo[164029]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:18 compute-0 sudo[164188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdjsdhbjcrvwyxiuikjoephukjmkatli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646398.52149-87-164841421150077/AnsiballZ_command.py'
Jan 05 20:53:18 compute-0 sudo[164188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:19 compute-0 python3.9[164190]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/iscsi-iname _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:53:19 compute-0 sudo[164188]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:19 compute-0 sudo[164341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sefcpxcqsktkhgutsmzrchqxesumljvm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646399.3883646-95-173985906280650/AnsiballZ_stat.py'
Jan 05 20:53:19 compute-0 sudo[164341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:20 compute-0 python3.9[164343]: ansible-ansible.legacy.stat Invoked with path=/etc/iscsi/initiatorname.iscsi follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:53:20 compute-0 sudo[164341]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:20 compute-0 sudo[164464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-spqzfljgisrvwbqdomkwaohprmbyiycz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646399.3883646-95-173985906280650/AnsiballZ_copy.py'
Jan 05 20:53:20 compute-0 sudo[164464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:20 compute-0 python3.9[164466]: ansible-ansible.legacy.copy Invoked with dest=/etc/iscsi/initiatorname.iscsi mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646399.3883646-95-173985906280650/.source.iscsi _original_basename=.r12d9f1z follow=False checksum=c932cd9cf3fe4d948ca56ca8670389627c6e74f6 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:20 compute-0 sudo[164464]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:21 compute-0 sudo[164616]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mzxtjscftsunjacitmewnhxqwuovdzrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646401.154207-110-227227368060254/AnsiballZ_file.py'
Jan 05 20:53:21 compute-0 sudo[164616]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:22 compute-0 python3.9[164618]: ansible-ansible.builtin.file Invoked with mode=0600 path=/etc/iscsi/.initiator_reset state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:22 compute-0 sudo[164616]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:22 compute-0 sudo[164768]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lvpcopyagdnosodcarqmqfbqicenuhma ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646402.2552602-118-66149799771608/AnsiballZ_lineinfile.py'
Jan 05 20:53:22 compute-0 sudo[164768]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:23 compute-0 python3.9[164770]: ansible-ansible.builtin.lineinfile Invoked with insertafter=^#node.session.auth.chap.algs line=node.session.auth.chap_algs = SHA3-256,SHA256,SHA1,MD5 path=/etc/iscsi/iscsid.conf regexp=^node.session.auth.chap_algs state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:23 compute-0 sudo[164768]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:24 compute-0 sudo[164920]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkamkkskqnmjtgkzmkatkbcyeectiuxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646403.2926998-127-168168077831356/AnsiballZ_systemd_service.py'
Jan 05 20:53:24 compute-0 sudo[164920]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:24 compute-0 python3.9[164922]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:53:24 compute-0 systemd[1]: Listening on Open-iSCSI iscsid Socket.
Jan 05 20:53:24 compute-0 sudo[164920]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:25 compute-0 sudo[165076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ucogiogoitifqzvopcrsrquitrmhwwrc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646404.7144308-135-279338387676058/AnsiballZ_systemd_service.py'
Jan 05 20:53:25 compute-0 sudo[165076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:25 compute-0 python3.9[165078]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=iscsid state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:53:25 compute-0 systemd[1]: Reloading.
Jan 05 20:53:25 compute-0 systemd-rc-local-generator[165112]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:53:25 compute-0 systemd-sysv-generator[165115]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:53:25 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 05 20:53:25 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 05 20:53:25 compute-0 kernel: Loading iSCSI transport class v2.0-870.
Jan 05 20:53:25 compute-0 systemd[1]: Started Open-iSCSI.
Jan 05 20:53:25 compute-0 systemd[1]: Starting Logout off all iSCSI sessions on shutdown...
Jan 05 20:53:25 compute-0 systemd[1]: Finished Logout off all iSCSI sessions on shutdown.
Jan 05 20:53:25 compute-0 sudo[165076]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:27 compute-0 python3.9[165282]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:53:27 compute-0 network[165299]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:53:27 compute-0 network[165300]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:53:27 compute-0 network[165301]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:53:32 compute-0 sudo[165570]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sbpttyrktlqhhbhqbafijfcbtxludfph ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646411.7558436-158-57066924001045/AnsiballZ_dnf.py'
Jan 05 20:53:32 compute-0 sudo[165570]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:32 compute-0 python3.9[165572]: ansible-ansible.legacy.dnf Invoked with name=['device-mapper-multipath'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:53:35 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:53:35 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:53:35 compute-0 systemd[1]: Reloading.
Jan 05 20:53:35 compute-0 systemd-rc-local-generator[165614]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:53:35 compute-0 systemd-sysv-generator[165617]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:53:35 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:53:35 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:53:35 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:53:35 compute-0 systemd[1]: run-r38297bf40f6042048a030d87a6a6a1db.service: Deactivated successfully.
Jan 05 20:53:35 compute-0 sudo[165570]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:36 compute-0 sudo[165886]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-csloaioqyocpmmwvlydfjqnwxeexxzki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646416.0955384-167-264823277611884/AnsiballZ_file.py'
Jan 05 20:53:36 compute-0 sudo[165886]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:36 compute-0 python3.9[165888]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 05 20:53:36 compute-0 sudo[165886]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:37 compute-0 sudo[166038]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viijnippbbslrnaobtycnyoqnhykwudr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646416.9643233-175-75217739935904/AnsiballZ_modprobe.py'
Jan 05 20:53:37 compute-0 sudo[166038]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:37 compute-0 python3.9[166040]: ansible-community.general.modprobe Invoked with name=dm-multipath state=present params= persistent=disabled
Jan 05 20:53:37 compute-0 sudo[166038]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:38 compute-0 sudo[166194]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecsjzmouvfhoteoorecyjrdoxcbfrvrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646418.0106308-183-15070395628000/AnsiballZ_stat.py'
Jan 05 20:53:38 compute-0 sudo[166194]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:38 compute-0 python3.9[166196]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/dm-multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:53:38 compute-0 sudo[166194]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:39 compute-0 sudo[166317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcowubrwjijpsvfjkmjkrzeehasedsko ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646418.0106308-183-15070395628000/AnsiballZ_copy.py'
Jan 05 20:53:39 compute-0 sudo[166317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:39 compute-0 python3.9[166319]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/dm-multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646418.0106308-183-15070395628000/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=065061c60917e4f67cecc70d12ce55e42f9d0b3f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:39 compute-0 sudo[166317]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:40 compute-0 sudo[166469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gtaydfanjcacifkiguqunsloxdxhsban ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646419.6589193-199-4417911339400/AnsiballZ_lineinfile.py'
Jan 05 20:53:40 compute-0 sudo[166469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:40 compute-0 python3.9[166471]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=dm-multipath  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:40 compute-0 sudo[166469]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:41 compute-0 sudo[166621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvyhmcurqxkvoqyfzicoofppjstbjond ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646420.5009341-207-78615091771738/AnsiballZ_systemd.py'
Jan 05 20:53:41 compute-0 sudo[166621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:41 compute-0 python3.9[166623]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:53:41 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 05 20:53:41 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 05 20:53:41 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 05 20:53:41 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 20:53:41 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 20:53:41 compute-0 sudo[166621]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:42 compute-0 sudo[166792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fspjurbzhgfnqxwvduwbswcdibcreeat ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646421.9282093-215-57577936183532/AnsiballZ_command.py'
Jan 05 20:53:42 compute-0 sudo[166792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:42 compute-0 podman[166751]: 2026-01-05 20:53:42.389382016 +0000 UTC m=+0.135908613 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 20:53:42 compute-0 python3.9[166798]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/sbin/restorecon -nvr /etc/multipath _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:53:42 compute-0 sudo[166792]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:53:42.820 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:53:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:53:42.820 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:53:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:53:42.820 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:53:43 compute-0 sudo[166956]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pxvzjqhczbfhzncgpqyvdkuqimuezulb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646422.8288746-225-258153207973457/AnsiballZ_stat.py'
Jan 05 20:53:43 compute-0 sudo[166956]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:43 compute-0 python3.9[166958]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:53:43 compute-0 sudo[166956]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:44 compute-0 sudo[167108]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gospmgnhbcgdxnzigmgrtzrlshgwuxqb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646423.7458076-234-231668240743401/AnsiballZ_stat.py'
Jan 05 20:53:44 compute-0 sudo[167108]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:44 compute-0 python3.9[167110]: ansible-ansible.legacy.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:53:44 compute-0 sudo[167108]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:44 compute-0 sudo[167231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dejaublucahtjpxhkndmqyaygsmiffxk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646423.7458076-234-231668240743401/AnsiballZ_copy.py'
Jan 05 20:53:44 compute-0 sudo[167231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:45 compute-0 python3.9[167233]: ansible-ansible.legacy.copy Invoked with dest=/etc/multipath.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646423.7458076-234-231668240743401/.source.conf _original_basename=multipath.conf follow=False checksum=bf02ab264d3d648048a81f3bacec8bc58db93162 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:45 compute-0 sudo[167231]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:45 compute-0 sudo[167383]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggqovfgxhlmxulipiurydadkwthtyefz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646425.383426-249-123267575566824/AnsiballZ_command.py'
Jan 05 20:53:45 compute-0 sudo[167383]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:46 compute-0 python3.9[167385]: ansible-ansible.legacy.command Invoked with _raw_params=grep -q '^blacklist\s*{' /etc/multipath.conf _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:53:46 compute-0 sudo[167383]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:46 compute-0 sudo[167536]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zpsydprukzehhnbrpbhknvjjrmcpqlhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646426.2845967-257-245975200677864/AnsiballZ_lineinfile.py'
Jan 05 20:53:46 compute-0 sudo[167536]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:46 compute-0 python3.9[167538]: ansible-ansible.builtin.lineinfile Invoked with line=blacklist { path=/etc/multipath.conf state=present encoding=utf-8 backrefs=False create=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:46 compute-0 sudo[167536]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:47 compute-0 sudo[167688]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bsawakcjrtvgsbgjfeknrfjyfdotwgak ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646427.1830354-265-272037742478297/AnsiballZ_replace.py'
Jan 05 20:53:47 compute-0 sudo[167688]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:48 compute-0 python3.9[167690]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^(blacklist {) replace=\1\n} backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:48 compute-0 sudo[167688]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:48 compute-0 podman[167814]: 2026-01-05 20:53:48.668336562 +0000 UTC m=+0.077317410 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 20:53:48 compute-0 sudo[167859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qmaqqwtoukkhiunapboctwrrjoasobxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646428.2853153-273-90492156334515/AnsiballZ_replace.py'
Jan 05 20:53:48 compute-0 sudo[167859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:48 compute-0 python3.9[167863]: ansible-ansible.builtin.replace Invoked with path=/etc/multipath.conf regexp=^blacklist\s*{\n[\s]+devnode \"\.\*\" replace=blacklist { backup=False encoding=utf-8 unsafe_writes=False after=None before=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:48 compute-0 sudo[167859]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:49 compute-0 sudo[168013]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rrphposvqoljcipiejiegivjoogmfomo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646429.1339474-282-230361472761870/AnsiballZ_lineinfile.py'
Jan 05 20:53:49 compute-0 sudo[168013]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:49 compute-0 python3.9[168015]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        find_multipaths yes path=/etc/multipath.conf regexp=^\s+find_multipaths state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:49 compute-0 sudo[168013]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:50 compute-0 sudo[168165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kcffykxglidocrdxozswwuflbgmerbhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646429.9587967-282-106477769951048/AnsiballZ_lineinfile.py'
Jan 05 20:53:50 compute-0 sudo[168165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:50 compute-0 python3.9[168167]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        recheck_wwid yes path=/etc/multipath.conf regexp=^\s+recheck_wwid state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:50 compute-0 sudo[168165]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:51 compute-0 sudo[168317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyfglfvpenhjeyluiirxzdyilicyxckz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646430.7591376-282-274901932771653/AnsiballZ_lineinfile.py'
Jan 05 20:53:51 compute-0 sudo[168317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:51 compute-0 python3.9[168319]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        skip_kpartx yes path=/etc/multipath.conf regexp=^\s+skip_kpartx state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:51 compute-0 sudo[168317]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:52 compute-0 sudo[168469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-otqcapwuuybuyyglkcivexzkkonzdmtf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646431.648799-282-110017975928884/AnsiballZ_lineinfile.py'
Jan 05 20:53:52 compute-0 sudo[168469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:52 compute-0 python3.9[168471]: ansible-ansible.builtin.lineinfile Invoked with firstmatch=True insertafter=^defaults line=        user_friendly_names no path=/etc/multipath.conf regexp=^\s+user_friendly_names state=present encoding=utf-8 backrefs=False create=False backup=False unsafe_writes=False search_string=None insertbefore=None validate=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:53:52 compute-0 sudo[168469]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:52 compute-0 sudo[168621]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kntbxmosrxikmtxfxuaisksduronrnzw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646432.4727635-311-13260117038312/AnsiballZ_stat.py'
Jan 05 20:53:52 compute-0 sudo[168621]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:53 compute-0 python3.9[168623]: ansible-ansible.builtin.stat Invoked with path=/etc/multipath.conf follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:53:53 compute-0 sudo[168621]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:53 compute-0 sudo[168775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-edxcawskzyhusekeymbobgakoehrpahd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646433.3975859-319-247297118738667/AnsiballZ_command.py'
Jan 05 20:53:53 compute-0 sudo[168775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:53 compute-0 python3.9[168777]: ansible-ansible.legacy.command Invoked with _raw_params=/usr/bin/true _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:53:54 compute-0 sudo[168775]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:54 compute-0 sudo[168928]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-etgdbqwgwlilogspjawmymzkpgpoexsm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646434.3419313-328-212346587599955/AnsiballZ_systemd_service.py'
Jan 05 20:53:54 compute-0 sudo[168928]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:55 compute-0 python3.9[168930]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:53:55 compute-0 systemd[1]: Listening on multipathd control socket.
Jan 05 20:53:55 compute-0 sudo[168928]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:55 compute-0 sudo[169084]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdmvzeqpwlqvqimqvikskxbygzqriwfj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646435.5142224-336-198504000956166/AnsiballZ_systemd_service.py'
Jan 05 20:53:55 compute-0 sudo[169084]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:56 compute-0 python3.9[169086]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=multipathd state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:53:56 compute-0 systemd[1]: Starting Wait for udev To Complete Device Initialization...
Jan 05 20:53:56 compute-0 udevadm[169091]: systemd-udev-settle.service is deprecated. Please fix multipathd.service not to pull it in.
Jan 05 20:53:56 compute-0 systemd[1]: Finished Wait for udev To Complete Device Initialization.
Jan 05 20:53:56 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 05 20:53:56 compute-0 multipathd[169094]: --------start up--------
Jan 05 20:53:56 compute-0 multipathd[169094]: read /etc/multipath.conf
Jan 05 20:53:56 compute-0 multipathd[169094]: path checkers start up
Jan 05 20:53:56 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 05 20:53:56 compute-0 sudo[169084]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:57 compute-0 sudo[169251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mdnkgtdohbrgpswsfrvfaqrilftvgphz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646436.9967124-348-144928886910665/AnsiballZ_file.py'
Jan 05 20:53:57 compute-0 sudo[169251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:57 compute-0 python3.9[169253]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/modules-load.d selevel=s0 setype=etc_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None attributes=None
Jan 05 20:53:57 compute-0 sudo[169251]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:58 compute-0 sudo[169403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vxgxvelyumjwjxobnoenwajmdtetpjcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646437.8819525-356-126568289939565/AnsiballZ_modprobe.py'
Jan 05 20:53:58 compute-0 sudo[169403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:58 compute-0 python3.9[169405]: ansible-community.general.modprobe Invoked with name=nvme-fabrics state=present params= persistent=disabled
Jan 05 20:53:58 compute-0 kernel: Key type psk registered
Jan 05 20:53:58 compute-0 sudo[169403]: pam_unix(sudo:session): session closed for user root
Jan 05 20:53:59 compute-0 sudo[169568]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aepoosmflmxpuiiidbsukhjlmfrblbmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646438.882605-364-188341836869882/AnsiballZ_stat.py'
Jan 05 20:53:59 compute-0 sudo[169568]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:53:59 compute-0 python3.9[169570]: ansible-ansible.legacy.stat Invoked with path=/etc/modules-load.d/nvme-fabrics.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:53:59 compute-0 sudo[169568]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:00 compute-0 sudo[169691]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tcaiompwduzisiwafvzsosgrujuthckh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646438.882605-364-188341836869882/AnsiballZ_copy.py'
Jan 05 20:54:00 compute-0 sudo[169691]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:00 compute-0 python3.9[169693]: ansible-ansible.legacy.copy Invoked with dest=/etc/modules-load.d/nvme-fabrics.conf mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646438.882605-364-188341836869882/.source.conf follow=False _original_basename=module-load.conf.j2 checksum=783c778f0c68cc414f35486f234cbb1cf3f9bbff backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:00 compute-0 sudo[169691]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:01 compute-0 sudo[169843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wmgsnqtyjwqnccyzvxzgrvbgipjowbav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646440.61587-380-232764130206526/AnsiballZ_lineinfile.py'
Jan 05 20:54:01 compute-0 sudo[169843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:01 compute-0 python3.9[169845]: ansible-ansible.builtin.lineinfile Invoked with create=True dest=/etc/modules line=nvme-fabrics  mode=0644 state=present path=/etc/modules encoding=utf-8 backrefs=False backup=False firstmatch=False unsafe_writes=False regexp=None search_string=None insertafter=None insertbefore=None validate=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:01 compute-0 sudo[169843]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:01 compute-0 sudo[169995]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfflraywgcsygncddcqnwnmfsjrrmfbc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646441.4506097-388-25863954703869/AnsiballZ_systemd.py'
Jan 05 20:54:01 compute-0 sudo[169995]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:02 compute-0 python3.9[169997]: ansible-ansible.builtin.systemd Invoked with name=systemd-modules-load.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:54:02 compute-0 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 05 20:54:02 compute-0 systemd[1]: Stopped Load Kernel Modules.
Jan 05 20:54:02 compute-0 systemd[1]: Stopping Load Kernel Modules...
Jan 05 20:54:02 compute-0 systemd[1]: Starting Load Kernel Modules...
Jan 05 20:54:02 compute-0 systemd[1]: Finished Load Kernel Modules.
Jan 05 20:54:02 compute-0 sudo[169995]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:02 compute-0 sudo[170151]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-scobanssqlwkovtqaddxltcpucueabeu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646442.5815847-396-150786330122145/AnsiballZ_dnf.py'
Jan 05 20:54:02 compute-0 sudo[170151]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:03 compute-0 python3.9[170153]: ansible-ansible.legacy.dnf Invoked with name=['nvme-cli'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 20:54:05 compute-0 systemd[1]: Reloading.
Jan 05 20:54:05 compute-0 systemd-sysv-generator[170188]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:54:05 compute-0 systemd-rc-local-generator[170182]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:54:05 compute-0 systemd[1]: Reloading.
Jan 05 20:54:05 compute-0 systemd-rc-local-generator[170221]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:54:05 compute-0 systemd-sysv-generator[170226]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:54:06 compute-0 virtsecretd[157530]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 05 20:54:06 compute-0 virtsecretd[157530]: hostname: compute-0
Jan 05 20:54:06 compute-0 virtsecretd[157530]: nl_recv returned with error: No buffer space available
Jan 05 20:54:06 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event0 (Power Button)
Jan 05 20:54:06 compute-0 systemd-logind[788]: Watching system buttons on /dev/input/event1 (AT Translated Set 2 keyboard)
Jan 05 20:54:06 compute-0 systemd[1]: Started /usr/bin/systemctl start man-db-cache-update.
Jan 05 20:54:06 compute-0 systemd[1]: Starting man-db-cache-update.service...
Jan 05 20:54:06 compute-0 systemd[1]: Reloading.
Jan 05 20:54:06 compute-0 systemd-rc-local-generator[170318]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:54:06 compute-0 systemd-sysv-generator[170322]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:54:06 compute-0 systemd[1]: Queuing reload/restart jobs for marked units…
Jan 05 20:54:07 compute-0 sudo[170151]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:08 compute-0 sudo[171469]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drsdyubsjxzgwgldbkyashazasubajxi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646447.6211808-404-40842559619947/AnsiballZ_systemd_service.py'
Jan 05 20:54:08 compute-0 sudo[171469]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:08 compute-0 python3.9[171496]: ansible-ansible.builtin.systemd_service Invoked with name=iscsid state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:54:08 compute-0 systemd[1]: Stopping Open-iSCSI...
Jan 05 20:54:08 compute-0 iscsid[165121]: iscsid shutting down.
Jan 05 20:54:08 compute-0 systemd[1]: iscsid.service: Deactivated successfully.
Jan 05 20:54:08 compute-0 systemd[1]: Stopped Open-iSCSI.
Jan 05 20:54:08 compute-0 systemd[1]: One time configuration for iscsi.service was skipped because of an unmet condition check (ConditionPathExists=!/etc/iscsi/initiatorname.iscsi).
Jan 05 20:54:08 compute-0 systemd[1]: Starting Open-iSCSI...
Jan 05 20:54:08 compute-0 systemd[1]: Started Open-iSCSI.
Jan 05 20:54:08 compute-0 sudo[171469]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:08 compute-0 systemd[1]: man-db-cache-update.service: Deactivated successfully.
Jan 05 20:54:08 compute-0 systemd[1]: Finished man-db-cache-update.service.
Jan 05 20:54:08 compute-0 systemd[1]: man-db-cache-update.service: Consumed 2.370s CPU time.
Jan 05 20:54:08 compute-0 systemd[1]: run-raeaec0cf3e3d48d195e4e1b567b2a2d7.service: Deactivated successfully.
Jan 05 20:54:09 compute-0 sudo[171775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqomcdzgdpmjvhrvmbkiezuwymvjcsja ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646448.750473-412-257939130693079/AnsiballZ_systemd_service.py'
Jan 05 20:54:09 compute-0 sudo[171775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:09 compute-0 python3.9[171777]: ansible-ansible.builtin.systemd_service Invoked with name=multipathd state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:54:09 compute-0 systemd[1]: Stopping Device-Mapper Multipath Device Controller...
Jan 05 20:54:09 compute-0 multipathd[169094]: exit (signal)
Jan 05 20:54:09 compute-0 multipathd[169094]: --------shut down-------
Jan 05 20:54:09 compute-0 systemd[1]: multipathd.service: Deactivated successfully.
Jan 05 20:54:09 compute-0 systemd[1]: Stopped Device-Mapper Multipath Device Controller.
Jan 05 20:54:09 compute-0 systemd[1]: Starting Device-Mapper Multipath Device Controller...
Jan 05 20:54:09 compute-0 multipathd[171783]: --------start up--------
Jan 05 20:54:09 compute-0 multipathd[171783]: read /etc/multipath.conf
Jan 05 20:54:09 compute-0 multipathd[171783]: path checkers start up
Jan 05 20:54:09 compute-0 systemd[1]: Started Device-Mapper Multipath Device Controller.
Jan 05 20:54:09 compute-0 sudo[171775]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:10 compute-0 python3.9[171940]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:54:11 compute-0 sudo[172094]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brrtyyokyjxykmoewubgplomsevahylh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646451.3208473-430-242618887271704/AnsiballZ_file.py'
Jan 05 20:54:11 compute-0 sudo[172094]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:11 compute-0 python3.9[172096]: ansible-ansible.builtin.file Invoked with mode=0644 path=/etc/ssh/ssh_known_hosts state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:11 compute-0 sudo[172094]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:12 compute-0 sudo[172268]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-renhcrocierflnksllxxplarpsqqpqki ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646452.394286-441-179079252102739/AnsiballZ_systemd_service.py'
Jan 05 20:54:12 compute-0 sudo[172268]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:12 compute-0 podman[172197]: 2026-01-05 20:54:12.824474237 +0000 UTC m=+0.153773578 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.schema-version=1.0, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 20:54:13 compute-0 python3.9[172274]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:54:13 compute-0 systemd[1]: Reloading.
Jan 05 20:54:13 compute-0 systemd-rc-local-generator[172303]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:54:13 compute-0 systemd-sysv-generator[172306]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:54:13 compute-0 sudo[172268]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:14 compute-0 python3.9[172460]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:54:14 compute-0 network[172477]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:54:14 compute-0 network[172478]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:54:14 compute-0 network[172479]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:54:15 compute-0 systemd[1]: virtnodedevd.service: Deactivated successfully.
Jan 05 20:54:17 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 20:54:18 compute-0 systemd[1]: virtqemud.service: Deactivated successfully.
Jan 05 20:54:18 compute-0 podman[172593]: 2026-01-05 20:54:18.843342786 +0000 UTC m=+0.110273287 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 20:54:19 compute-0 systemd[1]: virtsecretd.service: Deactivated successfully.
Jan 05 20:54:20 compute-0 sudo[172772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hqvzbccxfxnsooispvbgbcepgkofsipm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646459.4298692-460-136587154849208/AnsiballZ_systemd_service.py'
Jan 05 20:54:20 compute-0 sudo[172772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:20 compute-0 python3.9[172774]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:20 compute-0 sudo[172772]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:21 compute-0 sudo[172925]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vacvywyszoytmycydpcjspakgzyyatim ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646460.5718918-460-163683598272257/AnsiballZ_systemd_service.py'
Jan 05 20:54:21 compute-0 sudo[172925]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:21 compute-0 python3.9[172927]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_migration_target.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:21 compute-0 sudo[172925]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:21 compute-0 sudo[173078]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zieuqfjvdculhefwnuuzemuuexhqdlys ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646461.575038-460-50030594091971/AnsiballZ_systemd_service.py'
Jan 05 20:54:21 compute-0 sudo[173078]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:22 compute-0 python3.9[173080]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api_cron.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:22 compute-0 sudo[173078]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:22 compute-0 sudo[173231]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jubaxosvwzniemcounlrstdaoqnyuhwp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646462.576645-460-66877327475170/AnsiballZ_systemd_service.py'
Jan 05 20:54:22 compute-0 sudo[173231]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:23 compute-0 python3.9[173233]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_api.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:23 compute-0 sudo[173231]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:23 compute-0 sudo[173384]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-waprklhlnmonnrnwtrtwjyvykrvdixhz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646463.531327-460-192793217280635/AnsiballZ_systemd_service.py'
Jan 05 20:54:23 compute-0 sudo[173384]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:24 compute-0 python3.9[173386]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_conductor.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:24 compute-0 sudo[173384]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:24 compute-0 sudo[173537]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wkdpiwknndinulvmmeadblymycrwbrot ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646464.4572814-460-268393422761785/AnsiballZ_systemd_service.py'
Jan 05 20:54:24 compute-0 sudo[173537]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:25 compute-0 python3.9[173539]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_metadata.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:25 compute-0 sudo[173537]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:25 compute-0 sudo[173690]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rqgtzamwrkbmzsyhiivrfhvyqzqbekjq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646465.3590963-460-181276153398406/AnsiballZ_systemd_service.py'
Jan 05 20:54:25 compute-0 sudo[173690]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:26 compute-0 python3.9[173692]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_scheduler.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:26 compute-0 sudo[173690]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:26 compute-0 sudo[173843]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yuzuaqoyfvvaaggwnmiqkonykoobdorh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646466.4085011-460-17318625978158/AnsiballZ_systemd_service.py'
Jan 05 20:54:26 compute-0 sudo[173843]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:27 compute-0 python3.9[173845]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_nova_vnc_proxy.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:54:27 compute-0 sudo[173843]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:28 compute-0 sudo[173996]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pqclwbcfaygnjvdpengryoxmeyccwkst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646467.7458599-519-261031734020711/AnsiballZ_file.py'
Jan 05 20:54:28 compute-0 sudo[173996]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:28 compute-0 python3.9[173998]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:28 compute-0 sudo[173996]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:28 compute-0 sudo[174148]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-szzvlvljewblpfobvjuvlrdhxdwxmkrf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646468.6120257-519-238485093772203/AnsiballZ_file.py'
Jan 05 20:54:28 compute-0 sudo[174148]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:29 compute-0 python3.9[174150]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:29 compute-0 sudo[174148]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:29 compute-0 sudo[174300]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpobksvdcbhdltxkihcywddticmwaixm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646469.402053-519-127784795230347/AnsiballZ_file.py'
Jan 05 20:54:29 compute-0 sudo[174300]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:30 compute-0 python3.9[174302]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:30 compute-0 sudo[174300]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:30 compute-0 sudo[174452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zidynxeztijxvegpkiipljbkghqwlchg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646470.2320473-519-202936336154659/AnsiballZ_file.py'
Jan 05 20:54:30 compute-0 sudo[174452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:30 compute-0 python3.9[174454]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:30 compute-0 sudo[174452]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:31 compute-0 sudo[174604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gejmxptyumcxaabfqrhijubclkzzggmc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646471.0711706-519-125715659511542/AnsiballZ_file.py'
Jan 05 20:54:31 compute-0 sudo[174604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:31 compute-0 python3.9[174606]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:31 compute-0 sudo[174604]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:32 compute-0 sudo[174756]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sehfhlhvgwghsetfhgtgkmifekoehzyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646472.0108783-519-114684832422450/AnsiballZ_file.py'
Jan 05 20:54:32 compute-0 sudo[174756]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:32 compute-0 python3.9[174758]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:32 compute-0 sudo[174756]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:33 compute-0 sudo[174908]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-brqtlbwjfjfewsqcughtibcdmxrufral ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646472.8190236-519-146105398259373/AnsiballZ_file.py'
Jan 05 20:54:33 compute-0 sudo[174908]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:33 compute-0 python3.9[174910]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:33 compute-0 sudo[174908]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:34 compute-0 sudo[175060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pflqfmyhbsoghevzdjmltemomemxagbm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646473.664371-519-97175228877752/AnsiballZ_file.py'
Jan 05 20:54:34 compute-0 sudo[175060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:34 compute-0 python3.9[175062]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:34 compute-0 sudo[175060]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:34 compute-0 sudo[175212]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhhogisrrunqfdafjmmrrjtixxqewfhc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646474.5584738-576-133696508935434/AnsiballZ_file.py'
Jan 05 20:54:34 compute-0 sudo[175212]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:35 compute-0 python3.9[175214]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:35 compute-0 sudo[175212]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:35 compute-0 sudo[175364]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shtrlbqfsemlvtqcwzakivdeunrnvran ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646475.3642383-576-69348788378857/AnsiballZ_file.py'
Jan 05 20:54:35 compute-0 sudo[175364]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:35 compute-0 python3.9[175366]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_migration_target.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:35 compute-0 sudo[175364]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:36 compute-0 sudo[175516]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vcodsxdygvwwffrvtvaxjyjoatrnvlpr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646476.1685004-576-83526812872658/AnsiballZ_file.py'
Jan 05 20:54:36 compute-0 sudo[175516]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:36 compute-0 python3.9[175518]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api_cron.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:36 compute-0 sudo[175516]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:37 compute-0 sudo[175668]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vihdejcqtarcqxsqngynacgaiinpgolk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646476.9785182-576-98654780844717/AnsiballZ_file.py'
Jan 05 20:54:37 compute-0 sudo[175668]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:37 compute-0 python3.9[175670]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_api.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:37 compute-0 sudo[175668]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:38 compute-0 sudo[175820]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ruksmyvllvogdzzkkrovzhwdrpndpxmr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646477.8177788-576-188689269847636/AnsiballZ_file.py'
Jan 05 20:54:38 compute-0 sudo[175820]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:38 compute-0 python3.9[175822]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_conductor.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:38 compute-0 sudo[175820]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:39 compute-0 sudo[175972]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kksvbzzoezbvdkjndfurcvmflpuwasbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646478.667912-576-23367673320744/AnsiballZ_file.py'
Jan 05 20:54:39 compute-0 sudo[175972]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:39 compute-0 python3.9[175974]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_metadata.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:39 compute-0 sudo[175972]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:39 compute-0 sudo[176124]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qqvfvrlcqnznlopdxknkvbjqixzgehgp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646479.4410362-576-69289262845115/AnsiballZ_file.py'
Jan 05 20:54:39 compute-0 sudo[176124]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:40 compute-0 python3.9[176126]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_scheduler.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:40 compute-0 sudo[176124]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:40 compute-0 sudo[176276]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iaobpolsgsyfaowddmuswhmwurvlwann ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646480.2322097-576-182739346434864/AnsiballZ_file.py'
Jan 05 20:54:40 compute-0 sudo[176276]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:40 compute-0 python3.9[176278]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_nova_vnc_proxy.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:54:40 compute-0 sudo[176276]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:41 compute-0 sudo[176428]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fwsapbinraawbwcbllgefusjcglfpjip ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646481.2596676-634-218463493499102/AnsiballZ_command.py'
Jan 05 20:54:41 compute-0 sudo[176428]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:41 compute-0 python3.9[176430]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:41 compute-0 sudo[176428]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:42 compute-0 python3.9[176582]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:54:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:54:42.821 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:54:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:54:42.821 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:54:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:54:42.821 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:54:43 compute-0 sudo[176750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-raprasphkdrrxufbliwearcekwvefmhd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646483.008039-652-117997234408959/AnsiballZ_systemd_service.py'
Jan 05 20:54:43 compute-0 sudo[176750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:43 compute-0 podman[176706]: 2026-01-05 20:54:43.474463703 +0000 UTC m=+0.138067369 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 05 20:54:43 compute-0 python3.9[176757]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:54:43 compute-0 systemd[1]: Reloading.
Jan 05 20:54:43 compute-0 systemd-rc-local-generator[176787]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:54:43 compute-0 systemd-sysv-generator[176790]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:54:44 compute-0 sudo[176750]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:44 compute-0 sudo[176946]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jcigfuwjvipbcvmfxgvwhgymmnqlsuje ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646484.235944-660-26269755617263/AnsiballZ_command.py'
Jan 05 20:54:44 compute-0 sudo[176946]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:44 compute-0 python3.9[176948]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:44 compute-0 sudo[176946]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:45 compute-0 sudo[177099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dzfjhprxtezzczzhodjnezezurnypypu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646485.1164534-660-265297581862297/AnsiballZ_command.py'
Jan 05 20:54:45 compute-0 sudo[177099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:45 compute-0 python3.9[177101]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_migration_target.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:45 compute-0 sudo[177099]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:46 compute-0 sudo[177252]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-udnjiaoybtnxupcwvpoopeytrkeayqec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646485.8835006-660-235603049492171/AnsiballZ_command.py'
Jan 05 20:54:46 compute-0 sudo[177252]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:46 compute-0 python3.9[177254]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api_cron.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:47 compute-0 sudo[177252]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:48 compute-0 sudo[177405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hxqkxkqpfnowukbjhkdweolgtzvgtezs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646487.6767738-660-165852760933541/AnsiballZ_command.py'
Jan 05 20:54:48 compute-0 sudo[177405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:48 compute-0 python3.9[177407]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_api.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:48 compute-0 sudo[177405]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:49 compute-0 sudo[177569]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hgtwmyeppeelojkpfrxsaymszphlndkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646488.588722-660-222015292814849/AnsiballZ_command.py'
Jan 05 20:54:49 compute-0 sudo[177569]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:49 compute-0 podman[177532]: 2026-01-05 20:54:49.144056924 +0000 UTC m=+0.094164975 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 20:54:49 compute-0 python3.9[177577]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_conductor.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:49 compute-0 sudo[177569]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:49 compute-0 sudo[177730]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-omrkhbnociznrvnriueztjciuaojjrsb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646489.554669-660-195146068033052/AnsiballZ_command.py'
Jan 05 20:54:49 compute-0 sudo[177730]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:50 compute-0 python3.9[177732]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_metadata.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:50 compute-0 sudo[177730]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:50 compute-0 sudo[177883]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-krxlpggdmapjeqrrxdkmnmcaurkubonp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646490.444809-660-169193921102576/AnsiballZ_command.py'
Jan 05 20:54:50 compute-0 sudo[177883]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:51 compute-0 python3.9[177885]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_scheduler.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:51 compute-0 sudo[177883]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:51 compute-0 sudo[178036]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvvcdvvhxdmpgrzozgcugiaatmjtitps ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646491.350978-660-71020627983044/AnsiballZ_command.py'
Jan 05 20:54:51 compute-0 sudo[178036]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:51 compute-0 python3.9[178038]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_nova_vnc_proxy.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:54:51 compute-0 sudo[178036]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:53 compute-0 sudo[178189]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rdughflizmbcwofkdoiddikvtppiwkii ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646493.2744882-739-40252565032893/AnsiballZ_file.py'
Jan 05 20:54:53 compute-0 sudo[178189]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:53 compute-0 python3.9[178191]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:53 compute-0 sudo[178189]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:54 compute-0 sudo[178341]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fafuuhiqovonlzatwulpifhaogycusct ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646494.1000981-739-2892585701128/AnsiballZ_file.py'
Jan 05 20:54:54 compute-0 sudo[178341]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:54 compute-0 python3.9[178343]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/containers setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:54 compute-0 sudo[178341]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:55 compute-0 sudo[178493]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mqrovbqkdameyfdswpdnkdnbveyojiiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646494.9339602-739-97218919175756/AnsiballZ_file.py'
Jan 05 20:54:55 compute-0 sudo[178493]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:55 compute-0 python3.9[178495]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/config/nova_nvme_cleaner setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:55 compute-0 sudo[178493]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:56 compute-0 sudo[178645]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjbbqghbgkvjvahhabnuvtwimshrabfv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646495.7673972-761-87280333372876/AnsiballZ_file.py'
Jan 05 20:54:56 compute-0 sudo[178645]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:56 compute-0 python3.9[178647]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:56 compute-0 sudo[178645]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:56 compute-0 sudo[178797]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gfojshpzfdqeytjmxemehitgxnxsnrhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646496.603477-761-123138307591763/AnsiballZ_file.py'
Jan 05 20:54:56 compute-0 sudo[178797]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:57 compute-0 python3.9[178799]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/_nova_secontext setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:57 compute-0 sudo[178797]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:57 compute-0 sudo[178949]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iqhouhqpjkxjtxgsxpfszbjimfnupmme ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646497.4164915-761-183576707284161/AnsiballZ_file.py'
Jan 05 20:54:57 compute-0 sudo[178949]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:58 compute-0 python3.9[178951]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/nova/instances setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:58 compute-0 sudo[178949]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:58 compute-0 sudo[179101]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hluwxmsknqgadyegtknpvcauerlvoqch ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646498.3277612-761-165386110811377/AnsiballZ_file.py'
Jan 05 20:54:58 compute-0 sudo[179101]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:58 compute-0 python3.9[179103]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/etc/ceph setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:58 compute-0 sudo[179101]: pam_unix(sudo:session): session closed for user root
Jan 05 20:54:59 compute-0 sudo[179253]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hcmneldinjvhqxvkjleddnwuahjrouua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646499.1718986-761-86021908410383/AnsiballZ_file.py'
Jan 05 20:54:59 compute-0 sudo[179253]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:54:59 compute-0 python3.9[179255]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/multipath setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:54:59 compute-0 sudo[179253]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:00 compute-0 sudo[179405]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xjqqtpnekqxejvexebwcrwjvvnlwvhgk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646499.9773593-761-197460149188241/AnsiballZ_file.py'
Jan 05 20:55:00 compute-0 sudo[179405]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:00 compute-0 python3.9[179407]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/etc/nvme setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:00 compute-0 sudo[179405]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:01 compute-0 sudo[179557]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhdwkbtluqiqxzchsytjtnbmkqoeties ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646500.9459212-761-165114786691705/AnsiballZ_file.py'
Jan 05 20:55:01 compute-0 sudo[179557]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:01 compute-0 python3.9[179559]: ansible-ansible.builtin.file Invoked with group=zuul owner=zuul path=/run/openvswitch setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:01 compute-0 sudo[179557]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:06 compute-0 sudo[179709]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pbqmaykycxyeiopgnjieknnpffwteltq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646506.124114-930-244442931016663/AnsiballZ_getent.py'
Jan 05 20:55:06 compute-0 sudo[179709]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:06 compute-0 python3.9[179711]: ansible-ansible.builtin.getent Invoked with database=passwd key=nova fail_key=True service=None split=None
Jan 05 20:55:06 compute-0 sudo[179709]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:07 compute-0 sudo[179862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skdibexpyrrvthjfpdmwmasolxajouge ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646507.2044458-938-82418575460537/AnsiballZ_group.py'
Jan 05 20:55:07 compute-0 sudo[179862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:08 compute-0 python3.9[179864]: ansible-ansible.builtin.group Invoked with gid=42436 name=nova state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:55:08 compute-0 groupadd[179865]: group added to /etc/group: name=nova, GID=42436
Jan 05 20:55:08 compute-0 groupadd[179865]: group added to /etc/gshadow: name=nova
Jan 05 20:55:08 compute-0 groupadd[179865]: new group: name=nova, GID=42436
Jan 05 20:55:08 compute-0 sudo[179862]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:09 compute-0 sudo[180020]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgyppddmdkgolyugutslnkxbkrypdyvs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646508.4540513-946-153181798529015/AnsiballZ_user.py'
Jan 05 20:55:09 compute-0 sudo[180020]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:09 compute-0 python3.9[180022]: ansible-ansible.builtin.user Invoked with comment=nova user group=nova groups=['libvirt'] name=nova shell=/bin/sh state=present uid=42436 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 20:55:09 compute-0 useradd[180024]: new user: name=nova, UID=42436, GID=42436, home=/home/nova, shell=/bin/sh, from=/dev/pts/0
Jan 05 20:55:09 compute-0 useradd[180024]: add 'nova' to group 'libvirt'
Jan 05 20:55:09 compute-0 useradd[180024]: add 'nova' to shadow group 'libvirt'
Jan 05 20:55:09 compute-0 sudo[180020]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:10 compute-0 sshd-session[180055]: Accepted publickey for zuul from 192.168.122.30 port 45180 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:55:10 compute-0 systemd-logind[788]: New session 24 of user zuul.
Jan 05 20:55:10 compute-0 systemd[1]: Started Session 24 of User zuul.
Jan 05 20:55:10 compute-0 sshd-session[180055]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:55:10 compute-0 sshd-session[180058]: Received disconnect from 192.168.122.30 port 45180:11: disconnected by user
Jan 05 20:55:10 compute-0 sshd-session[180058]: Disconnected from user zuul 192.168.122.30 port 45180
Jan 05 20:55:10 compute-0 sshd-session[180055]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:55:10 compute-0 systemd[1]: session-24.scope: Deactivated successfully.
Jan 05 20:55:10 compute-0 systemd-logind[788]: Session 24 logged out. Waiting for processes to exit.
Jan 05 20:55:10 compute-0 systemd-logind[788]: Removed session 24.
Jan 05 20:55:11 compute-0 python3.9[180208]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/config.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:12 compute-0 python3.9[180329]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/config.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646510.8652723-971-204518462951891/.source.json follow=False _original_basename=config.json.j2 checksum=b51012bfb0ca26296dcf3793a2f284446fb1395e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:13 compute-0 python3.9[180479]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova-blank.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:13 compute-0 python3.9[180555]: ansible-ansible.legacy.file Invoked with mode=0644 setype=container_file_t dest=/var/lib/openstack/config/nova/nova-blank.conf _original_basename=nova-blank.conf recurse=False state=file path=/var/lib/openstack/config/nova/nova-blank.conf force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:13 compute-0 podman[180556]: 2026-01-05 20:55:13.748147089 +0000 UTC m=+0.137378571 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 05 20:55:14 compute-0 python3.9[180729]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/ssh-config follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:15 compute-0 python3.9[180850]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/ssh-config mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646513.8151553-971-232020256151684/.source follow=False _original_basename=ssh-config checksum=4297f735c41bdc1ff52d72e6f623a02242f37958 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:15 compute-0 python3.9[181000]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/02-nova-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:16 compute-0 python3.9[181121]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/02-nova-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646515.3312578-971-203010874555805/.source.conf follow=False _original_basename=02-nova-host-specific.conf.j2 checksum=1feba546d0beacad9258164ab79b8a747685ccc8 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:17 compute-0 python3.9[181271]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/nova_statedir_ownership.py follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:18 compute-0 python3.9[181392]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/nova_statedir_ownership.py mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646516.8120167-971-177579028563470/.source.py follow=False _original_basename=nova_statedir_ownership.py checksum=c6c8a3cfefa5efd60ceb1408c4e977becedb71e2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:18 compute-0 python3.9[181542]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/nova/run-on-host follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:19 compute-0 podman[181637]: 2026-01-05 20:55:19.497006796 +0000 UTC m=+0.083580779 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 20:55:19 compute-0 python3.9[181682]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/nova/run-on-host mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646518.3868957-971-110888811845227/.source follow=False _original_basename=run-on-host checksum=93aba8edc83d5878604a66d37fea2f12b60bdea2 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:20 compute-0 sudo[181834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aatznofxyooahdrlmhutuxfvnivcsxfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646519.9777694-1054-90804680842968/AnsiballZ_file.py'
Jan 05 20:55:20 compute-0 sudo[181834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:20 compute-0 python3.9[181836]: ansible-ansible.builtin.file Invoked with group=nova mode=0700 owner=nova path=/home/nova/.ssh state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:55:20 compute-0 sudo[181834]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:21 compute-0 sudo[181986]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nwqqppaptithkeezkzrprpxqwuqywgyf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646520.9098616-1062-177969859407207/AnsiballZ_copy.py'
Jan 05 20:55:21 compute-0 sudo[181986]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:21 compute-0 python3.9[181988]: ansible-ansible.legacy.copy Invoked with dest=/home/nova/.ssh/authorized_keys group=nova mode=0600 owner=nova remote_src=True src=/var/lib/openstack/config/nova/ssh-publickey backup=False force=True follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:55:21 compute-0 sudo[181986]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:22 compute-0 sudo[182138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qvzhldnzidkmtzvqcsjhurthpkmykbgc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646521.813609-1070-239249447923164/AnsiballZ_stat.py'
Jan 05 20:55:22 compute-0 sudo[182138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:22 compute-0 python3.9[182140]: ansible-ansible.builtin.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:22 compute-0 sudo[182138]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:23 compute-0 sudo[182290]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hpcvoimsalwqjhtwmgsaywkkjivouppj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646522.8652196-1078-134073304306249/AnsiballZ_stat.py'
Jan 05 20:55:23 compute-0 sudo[182290]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:23 compute-0 python3.9[182292]: ansible-ansible.legacy.stat Invoked with path=/var/lib/nova/compute_id follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:23 compute-0 sudo[182290]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:24 compute-0 sudo[182413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ccdveolaxabdbknvonacevuaynvfjkbg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646522.8652196-1078-134073304306249/AnsiballZ_copy.py'
Jan 05 20:55:24 compute-0 sudo[182413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:24 compute-0 python3.9[182415]: ansible-ansible.legacy.copy Invoked with attributes=+i dest=/var/lib/nova/compute_id group=nova mode=0400 owner=nova src=/home/zuul/.ansible/tmp/ansible-tmp-1767646522.8652196-1078-134073304306249/.source _original_basename=.jevd2uh_ follow=False checksum=a35ac3292bfb01f54341977913a832d0fc3cb4e4 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None
Jan 05 20:55:24 compute-0 sudo[182413]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:25 compute-0 python3.9[182567]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:26 compute-0 python3.9[182719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:26 compute-0 python3.9[182840]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute.json mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646525.471465-1104-181087228307314/.source.json follow=False _original_basename=nova_compute.json.j2 checksum=aff5546b44cf4461a7541a94e4cce1332c9b58b0 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:27 compute-0 python3.9[182990]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/config/containers/nova_compute_init.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:55:28 compute-0 python3.9[183111]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/config/containers/nova_compute_init.json mode=0700 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646527.103837-1119-140474370205004/.source.json follow=False _original_basename=nova_compute_init.json.j2 checksum=60b024e6db49dc6e700fc0d50263944d98d4c034 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:55:29 compute-0 sudo[183261]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-favqyltemymkgmaqvaaoafmekozlzxux ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646528.796201-1136-83281470829543/AnsiballZ_container_config_data.py'
Jan 05 20:55:29 compute-0 sudo[183261]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:29 compute-0 python3.9[183263]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute_init.json debug=False
Jan 05 20:55:29 compute-0 sudo[183261]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:30 compute-0 sudo[183413]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtrrrkojibatgcmdfrcnmhoetkpvodll ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646530.1052682-1147-103862889607353/AnsiballZ_container_config_hash.py'
Jan 05 20:55:30 compute-0 sudo[183413]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:30 compute-0 python3.9[183415]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:55:30 compute-0 sudo[183413]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:31 compute-0 sudo[183565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pajmzhbojhwkupgfatwwvziqshlqdhbl ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646531.3271413-1157-101390885651623/AnsiballZ_edpm_container_manage.py'
Jan 05 20:55:31 compute-0 sudo[183565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:32 compute-0 python3[183567]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute_init.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:55:32 compute-0 podman[183603]: 2026-01-05 20:55:32.580451198 +0000 UTC m=+0.075855435 container create 2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, container_name=nova_compute_init, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, org.label-schema.schema-version=1.0, config_id=edpm, managed_by=edpm_ansible)
Jan 05 20:55:32 compute-0 podman[183603]: 2026-01-05 20:55:32.547925122 +0000 UTC m=+0.043329339 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 20:55:32 compute-0 python3[183567]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute_init --conmon-pidfile /run/nova_compute_init.pid --env NOVA_STATEDIR_OWNERSHIP_SKIP=/var/lib/nova/compute_id --env __OS_DEBUG=False --label config_id=edpm --label container_name=nova_compute_init --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']} --log-driver journald --log-level info --network none --privileged=False --security-opt label=disable --user root --volume /dev/log:/dev/log --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z --volume /var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init
Jan 05 20:55:32 compute-0 sudo[183565]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:33 compute-0 sudo[183791]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-felpfzwtogvmyuzcvyttyzwogdpvswpq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646533.0737028-1165-234490278556648/AnsiballZ_stat.py'
Jan 05 20:55:33 compute-0 sudo[183791]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:33 compute-0 python3.9[183793]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:33 compute-0 sudo[183791]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:34 compute-0 sudo[183945]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dicoefmerroqbdwhxssgezqeeptgnsbp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646534.4809573-1177-35678239957578/AnsiballZ_container_config_data.py'
Jan 05 20:55:34 compute-0 sudo[183945]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:35 compute-0 python3.9[183947]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/openstack/config/containers config_pattern=nova_compute.json debug=False
Jan 05 20:55:35 compute-0 sudo[183945]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:35 compute-0 sudo[184097]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrhsgbqsfllfoupbdillfzcnlgwvavra ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646535.4450037-1188-44869751020590/AnsiballZ_container_config_hash.py'
Jan 05 20:55:35 compute-0 sudo[184097]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:36 compute-0 python3.9[184099]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:55:36 compute-0 sudo[184097]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:36 compute-0 sudo[184249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nolbftodvciqeqrawhecpaduphfklhdb ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646536.4719796-1198-21082580843776/AnsiballZ_edpm_container_manage.py'
Jan 05 20:55:36 compute-0 sudo[184249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:37 compute-0 python3[184251]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/openstack/config/containers config_id=edpm config_overrides={} config_patterns=nova_compute.json containers=[] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:55:37 compute-0 podman[184287]: 2026-01-05 20:55:37.414302236 +0000 UTC m=+0.070021613 container create 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, org.label-schema.build-date=20251202, container_name=nova_compute, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=edpm, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:55:37 compute-0 podman[184287]: 2026-01-05 20:55:37.376586124 +0000 UTC m=+0.032305551 image pull e3166cc074f328e3b121ff82d56ed43a2542af699baffe6874520fe3837c2b18 quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified
Jan 05 20:55:37 compute-0 python3[184251]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name nova_compute --conmon-pidfile /run/nova_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --label config_id=edpm --label container_name=nova_compute --label managed_by=edpm_ansible --label config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']} --log-driver journald --log-level info --network host --pid host --privileged=True --user nova --volume /var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro --volume /var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /etc/localtime:/etc/localtime:ro --volume /lib/modules:/lib/modules:ro --volume /dev:/dev --volume /var/lib/libvirt:/var/lib/libvirt --volume /run/libvirt:/run/libvirt:shared --volume /var/lib/nova:/var/lib/nova:shared --volume /var/lib/iscsi:/var/lib/iscsi --volume /etc/multipath:/etc/multipath --volume /etc/multipath.conf:/etc/multipath.conf:ro,Z --volume /etc/iscsi:/etc/iscsi:ro --volume /etc/nvme:/etc/nvme --volume /var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro --volume /etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified kolla_start
Jan 05 20:55:37 compute-0 sudo[184249]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:38 compute-0 sudo[184475]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lpzlvlruocmcsgksvyxeubfzmhvibsbw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646537.8696797-1206-169871228252475/AnsiballZ_stat.py'
Jan 05 20:55:38 compute-0 sudo[184475]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:38 compute-0 python3.9[184477]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:38 compute-0 sudo[184475]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:39 compute-0 sudo[184629]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fpgmtuvhrnsfmlihhdztpnoecgcvshiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646538.818238-1215-244293408504426/AnsiballZ_file.py'
Jan 05 20:55:39 compute-0 sudo[184629]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:39 compute-0 python3.9[184631]: ansible-file Invoked with path=/etc/systemd/system/edpm_nova_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:55:39 compute-0 sudo[184629]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:40 compute-0 sudo[184780]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugmxlkhmivexiyxwibxcqxoqnjmojkmm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646539.7132418-1215-16530195688065/AnsiballZ_copy.py'
Jan 05 20:55:40 compute-0 sudo[184780]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:40 compute-0 python3.9[184782]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646539.7132418-1215-16530195688065/source dest=/etc/systemd/system/edpm_nova_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:55:40 compute-0 sudo[184780]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:40 compute-0 sudo[184856]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhmaymioxdamwapmosvawywoeefostfw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646539.7132418-1215-16530195688065/AnsiballZ_systemd.py'
Jan 05 20:55:40 compute-0 sudo[184856]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:41 compute-0 python3.9[184858]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:55:41 compute-0 systemd[1]: Reloading.
Jan 05 20:55:41 compute-0 systemd-rc-local-generator[184885]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:55:41 compute-0 systemd-sysv-generator[184888]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:55:41 compute-0 sudo[184856]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:41 compute-0 sudo[184966]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-idqxlhrcpomfwgejifaflnuedshjgwws ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646539.7132418-1215-16530195688065/AnsiballZ_systemd.py'
Jan 05 20:55:41 compute-0 sudo[184966]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:42 compute-0 python3.9[184968]: ansible-systemd Invoked with state=restarted name=edpm_nova_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:55:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:55:42.821 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:55:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:55:42.823 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:55:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:55:42.823 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:55:43 compute-0 systemd[1]: Reloading.
Jan 05 20:55:43 compute-0 systemd-rc-local-generator[184993]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:55:43 compute-0 systemd-sysv-generator[184996]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:55:43 compute-0 systemd[1]: Starting nova_compute container...
Jan 05 20:55:43 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:43 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:43 compute-0 podman[185008]: 2026-01-05 20:55:43.666764155 +0000 UTC m=+0.134734537 container init 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=edpm, container_name=nova_compute, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']})
Jan 05 20:55:43 compute-0 podman[185008]: 2026-01-05 20:55:43.67886872 +0000 UTC m=+0.146839062 container start 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, config_id=edpm, container_name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 20:55:43 compute-0 podman[185008]: nova_compute
Jan 05 20:55:43 compute-0 nova_compute[185024]: + sudo -E kolla_set_configs
Jan 05 20:55:43 compute-0 systemd[1]: Started nova_compute container.
Jan 05 20:55:43 compute-0 sudo[184966]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Validating config file
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying service configuration files
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Deleting /etc/ceph
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Creating directory /etc/ceph
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /etc/ceph
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Writing out command to execute
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:43 compute-0 nova_compute[185024]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 20:55:43 compute-0 nova_compute[185024]: ++ cat /run_command
Jan 05 20:55:43 compute-0 nova_compute[185024]: + CMD=nova-compute
Jan 05 20:55:43 compute-0 nova_compute[185024]: + ARGS=
Jan 05 20:55:43 compute-0 nova_compute[185024]: + sudo kolla_copy_cacerts
Jan 05 20:55:43 compute-0 nova_compute[185024]: + [[ ! -n '' ]]
Jan 05 20:55:43 compute-0 nova_compute[185024]: + . kolla_extend_start
Jan 05 20:55:43 compute-0 nova_compute[185024]: + echo 'Running command: '\''nova-compute'\'''
Jan 05 20:55:43 compute-0 nova_compute[185024]: Running command: 'nova-compute'
Jan 05 20:55:43 compute-0 nova_compute[185024]: + umask 0022
Jan 05 20:55:43 compute-0 nova_compute[185024]: + exec nova-compute
Jan 05 20:55:44 compute-0 podman[185137]: 2026-01-05 20:55:44.747919917 +0000 UTC m=+0.104393888 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 05 20:55:44 compute-0 python3.9[185214]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner_healthcheck.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:45 compute-0 python3.9[185364]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:45 compute-0 nova_compute[185024]: 2026-01-05 20:55:45.853 185028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:45 compute-0 nova_compute[185024]: 2026-01-05 20:55:45.854 185028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:45 compute-0 nova_compute[185024]: 2026-01-05 20:55:45.854 185028 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:45 compute-0 nova_compute[185024]: 2026-01-05 20:55:45.854 185028 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.019 185028 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.049 185028 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.050 185028 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 05 20:55:46 compute-0 python3.9[185518]: ansible-ansible.builtin.stat Invoked with path=/etc/systemd/system/edpm_nova_nvme_cleaner.service.requires follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.672 185028 INFO nova.virt.driver [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.805 185028 INFO nova.compute.provider_config [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.816 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.817 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.818 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.818 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.818 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.818 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.818 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.819 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.820 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.821 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.822 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.823 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.824 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.825 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.826 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.827 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.828 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.829 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.830 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.831 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.832 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.833 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.834 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.835 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.836 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.837 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.838 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.839 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.840 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.841 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.842 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.843 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.844 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.845 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.846 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.847 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.848 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.849 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.850 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.851 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.852 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.853 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.854 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.855 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.856 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.857 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.858 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.859 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.860 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.861 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.862 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.863 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.864 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.865 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.866 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.867 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.868 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.868 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.868 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.868 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.868 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.869 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.870 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.871 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.872 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.873 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.874 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.875 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.876 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.877 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.878 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.879 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.880 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.881 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.882 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.883 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.884 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.885 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.886 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 WARNING oslo_config.cfg [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 05 20:55:46 compute-0 nova_compute[185024]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 05 20:55:46 compute-0 nova_compute[185024]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 05 20:55:46 compute-0 nova_compute[185024]: and ``live_migration_inbound_addr`` respectively.
Jan 05 20:55:46 compute-0 nova_compute[185024]: ).  Its value may be silently ignored in the future.
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.887 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.888 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.889 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.890 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.891 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.892 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.893 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.894 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.895 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.896 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.897 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.898 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.899 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.900 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.901 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.902 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.903 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.904 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.905 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.906 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.907 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.908 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.909 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.910 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.911 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.912 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.913 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.914 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.915 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.916 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.917 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.918 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.919 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.920 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.921 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.922 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.923 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.924 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.925 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.926 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.927 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.928 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.929 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.930 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.931 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.932 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.933 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.934 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.935 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.936 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.937 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.938 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.939 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.940 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.941 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.942 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.943 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.944 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.945 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.946 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.947 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.947 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.947 185028 DEBUG oslo_service.service [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.948 185028 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.962 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.963 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.963 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 05 20:55:46 compute-0 nova_compute[185024]: 2026-01-05 20:55:46.963 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 05 20:55:46 compute-0 systemd[1]: Starting libvirt QEMU daemon...
Jan 05 20:55:47 compute-0 systemd[1]: Started libvirt QEMU daemon.
Jan 05 20:55:47 compute-0 nova_compute[185024]: 2026-01-05 20:55:47.028 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7fd82682bdf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 05 20:55:47 compute-0 nova_compute[185024]: 2026-01-05 20:55:47.031 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7fd82682bdf0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 05 20:55:47 compute-0 nova_compute[185024]: 2026-01-05 20:55:47.031 185028 INFO nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Connection event '1' reason 'None'
Jan 05 20:55:47 compute-0 nova_compute[185024]: 2026-01-05 20:55:47.052 185028 WARNING nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Cannot update service status on host "compute-0.ctlplane.example.com" since it is not registered.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 20:55:47 compute-0 nova_compute[185024]: 2026-01-05 20:55:47.052 185028 DEBUG nova.virt.libvirt.volume.mount [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 05 20:55:47 compute-0 sudo[185720]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dpmyffslvtxpyuhcqnqocvcgrpjrzpdd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646546.7827256-1275-146289882602270/AnsiballZ_podman_container.py'
Jan 05 20:55:47 compute-0 sudo[185720]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:47 compute-0 python3.9[185722]: ansible-containers.podman.podman_container Invoked with name=nova_nvme_cleaner state=absent executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 05 20:55:47 compute-0 sudo[185720]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:55:47 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.038 185028 INFO nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Libvirt host capabilities <capabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]: 
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <host>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <uuid>103e5390-173f-4d3f-9983-22472b3a8bf4</uuid>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <arch>x86_64</arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model>EPYC-Rome-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <vendor>AMD</vendor>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <microcode version='16777317'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <signature family='23' model='49' stepping='0'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='x2apic'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='tsc-deadline'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='osxsave'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='hypervisor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='tsc_adjust'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='spec-ctrl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='stibp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='arch-capabilities'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='cmp_legacy'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='topoext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='virt-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='lbrv'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='tsc-scale'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='vmcb-clean'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='pause-filter'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='pfthreshold'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='svme-addr-chk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='rdctl-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='skip-l1dfl-vmentry'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='mds-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature name='pschange-mc-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <pages unit='KiB' size='4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <pages unit='KiB' size='2048'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <pages unit='KiB' size='1048576'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <power_management>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <suspend_mem/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <suspend_disk/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <suspend_hybrid/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </power_management>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <iommu support='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <migration_features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <live/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <uri_transports>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <uri_transport>tcp</uri_transport>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <uri_transport>rdma</uri_transport>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </uri_transports>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </migration_features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <topology>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <cells num='1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <cell id='0'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <memory unit='KiB'>7864312</memory>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <pages unit='KiB' size='2048'>0</pages>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <distances>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <sibling id='0' value='10'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           </distances>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           <cpus num='8'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:           </cpus>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         </cell>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </cells>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </topology>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <cache>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </cache>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <secmodel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model>selinux</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <doi>0</doi>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </secmodel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <secmodel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model>dac</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <doi>0</doi>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </secmodel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </host>
Jan 05 20:55:48 compute-0 nova_compute[185024]: 
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <guest>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <os_type>hvm</os_type>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <arch name='i686'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <wordsize>32</wordsize>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <domain type='qemu'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <domain type='kvm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <pae/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <nonpae/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <acpi default='on' toggle='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <apic default='on' toggle='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <cpuselection/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <deviceboot/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <disksnapshot default='on' toggle='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <externalSnapshot/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </guest>
Jan 05 20:55:48 compute-0 nova_compute[185024]: 
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <guest>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <os_type>hvm</os_type>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <arch name='x86_64'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <wordsize>64</wordsize>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <domain type='qemu'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <domain type='kvm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <acpi default='on' toggle='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <apic default='on' toggle='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <cpuselection/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <deviceboot/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <disksnapshot default='on' toggle='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <externalSnapshot/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </guest>
Jan 05 20:55:48 compute-0 nova_compute[185024]: 
Jan 05 20:55:48 compute-0 nova_compute[185024]: </capabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]: 
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.047 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Getting domain capabilities for i686 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.073 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 05 20:55:48 compute-0 nova_compute[185024]: <domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <domain>kvm</domain>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <arch>i686</arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <vcpu max='4096'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <iothreads supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <os supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='firmware'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <loader supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>rom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pflash</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='readonly'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>yes</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='secure'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </loader>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </os>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='maximumMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <vendor>AMD</vendor>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='succor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='custom' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-128'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-256'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-512'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <memoryBacking supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='sourceType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>anonymous</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>memfd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </memoryBacking>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <disk supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='diskDevice'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>disk</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cdrom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>floppy</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>lun</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>fdc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>sata</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </disk>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <graphics supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vnc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egl-headless</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </graphics>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <video supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='modelType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vga</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cirrus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>none</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>bochs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ramfb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </video>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hostdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='mode'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>subsystem</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='startupPolicy'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>mandatory</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>requisite</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>optional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='subsysType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pci</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='capsType'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='pciBackend'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hostdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <rng supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>random</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </rng>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <filesystem supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='driverType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>path</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>handle</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtiofs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </filesystem>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <tpm supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-tis</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-crb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emulator</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>external</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendVersion'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>2.0</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </tpm>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <redirdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </redirdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <channel supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </channel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <crypto supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </crypto>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <interface supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>passt</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </interface>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <panic supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>isa</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>hyperv</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </panic>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <console supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>null</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dev</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pipe</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stdio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>udp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tcp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu-vdagent</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </console>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <gic supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <genid supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backup supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <async-teardown supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <ps2 supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sev supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sgx supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hyperv supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='features'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>relaxed</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vapic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>spinlocks</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vpindex</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>runtime</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>synic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stimer</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reset</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vendor_id</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>frequencies</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reenlightenment</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tlbflush</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ipi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>avic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emsr_bitmap</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>xmm_input</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hyperv>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <launchSecurity supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='sectype'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tdx</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </launchSecurity>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]: </domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.084 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 05 20:55:48 compute-0 nova_compute[185024]: <domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <domain>kvm</domain>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <arch>i686</arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <vcpu max='240'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <iothreads supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <os supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='firmware'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <loader supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>rom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pflash</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='readonly'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>yes</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='secure'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </loader>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </os>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='maximumMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <vendor>AMD</vendor>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='succor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='custom' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-128'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-256'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-512'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <memoryBacking supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='sourceType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>anonymous</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>memfd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </memoryBacking>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <disk supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='diskDevice'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>disk</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cdrom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>floppy</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>lun</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ide</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>fdc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>sata</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </disk>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <graphics supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vnc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egl-headless</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </graphics>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <video supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='modelType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vga</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cirrus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>none</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>bochs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ramfb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </video>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hostdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='mode'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>subsystem</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='startupPolicy'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>mandatory</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>requisite</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>optional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='subsysType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pci</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='capsType'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='pciBackend'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hostdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <rng supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>random</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </rng>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <filesystem supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='driverType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>path</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>handle</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtiofs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </filesystem>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <tpm supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-tis</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-crb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emulator</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>external</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendVersion'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>2.0</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </tpm>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <redirdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </redirdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <channel supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </channel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <crypto supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </crypto>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <interface supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>passt</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </interface>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <panic supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>isa</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>hyperv</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </panic>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <console supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>null</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dev</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pipe</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stdio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>udp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tcp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu-vdagent</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </console>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <gic supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <genid supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backup supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <async-teardown supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <ps2 supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sev supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sgx supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hyperv supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='features'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>relaxed</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vapic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>spinlocks</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vpindex</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>runtime</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>synic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stimer</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reset</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vendor_id</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>frequencies</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reenlightenment</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tlbflush</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ipi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>avic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emsr_bitmap</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>xmm_input</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hyperv>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <launchSecurity supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='sectype'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tdx</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </launchSecurity>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]: </domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.128 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'q35', 'pc'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.136 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 05 20:55:48 compute-0 nova_compute[185024]: <domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <domain>kvm</domain>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <arch>x86_64</arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <vcpu max='4096'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <iothreads supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <os supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='firmware'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>efi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <loader supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>rom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pflash</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='readonly'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>yes</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='secure'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>yes</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </loader>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </os>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='maximumMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <vendor>AMD</vendor>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='succor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='custom' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-128'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-256'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-512'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <memoryBacking supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='sourceType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>anonymous</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>memfd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </memoryBacking>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <disk supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='diskDevice'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>disk</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cdrom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>floppy</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>lun</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>fdc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>sata</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </disk>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <graphics supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vnc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egl-headless</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </graphics>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <video supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='modelType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vga</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cirrus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>none</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>bochs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ramfb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </video>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hostdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='mode'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>subsystem</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='startupPolicy'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>mandatory</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>requisite</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>optional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='subsysType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pci</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='capsType'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='pciBackend'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hostdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <rng supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>random</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </rng>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <filesystem supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='driverType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>path</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>handle</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtiofs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </filesystem>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <tpm supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-tis</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-crb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emulator</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>external</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendVersion'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>2.0</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </tpm>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <redirdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </redirdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <channel supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </channel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <crypto supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </crypto>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <interface supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>passt</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </interface>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <panic supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>isa</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>hyperv</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </panic>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <console supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>null</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dev</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pipe</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stdio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>udp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tcp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu-vdagent</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </console>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <gic supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <genid supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backup supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <async-teardown supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <ps2 supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sev supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sgx supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hyperv supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='features'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>relaxed</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vapic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>spinlocks</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vpindex</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>runtime</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>synic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stimer</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reset</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vendor_id</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>frequencies</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reenlightenment</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tlbflush</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ipi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>avic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emsr_bitmap</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>xmm_input</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hyperv>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <launchSecurity supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='sectype'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tdx</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </launchSecurity>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]: </domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.207 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 05 20:55:48 compute-0 nova_compute[185024]: <domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <domain>kvm</domain>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <arch>x86_64</arch>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <vcpu max='240'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <iothreads supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <os supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='firmware'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <loader supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>rom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pflash</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='readonly'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>yes</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='secure'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>no</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </loader>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </os>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='maximumMigratable'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>on</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>off</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <vendor>AMD</vendor>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='succor'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <mode name='custom' supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Denverton-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='auto-ibrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amd-psfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='stibp-always-on'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='EPYC-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-128'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-256'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx10-512'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='prefetchiti'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Haswell-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512er'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512pf'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fma4'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tbm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xop'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='amx-tile'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-bf16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-fp16'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bitalg'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrc'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fzrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='la57'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='taa-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xfd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ifma'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cmpccxadd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fbsdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='fsrs'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ibrs-all'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mcdt-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pbrsb-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='psdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='serialize'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vaes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='hle'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='rtm'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512bw'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512cd'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512dq'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512f'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='avx512vl'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='invpcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pcid'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='pku'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='mpx'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='core-capability'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='split-lock-detect'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='cldemote'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='erms'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='gfni'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdir64b'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='movdiri'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='xsaves'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='athlon-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='core2duo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='coreduo-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='n270-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='ss'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <blockers model='phenom-v1'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnow'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <feature name='3dnowext'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </blockers>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </mode>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </cpu>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <memoryBacking supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <enum name='sourceType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>anonymous</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <value>memfd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </memoryBacking>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <disk supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='diskDevice'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>disk</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cdrom</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>floppy</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>lun</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ide</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>fdc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>sata</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </disk>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <graphics supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vnc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egl-headless</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </graphics>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <video supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='modelType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vga</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>cirrus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>none</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>bochs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ramfb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </video>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hostdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='mode'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>subsystem</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='startupPolicy'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>mandatory</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>requisite</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>optional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='subsysType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pci</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>scsi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='capsType'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='pciBackend'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hostdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <rng supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtio-non-transitional</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>random</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>egd</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </rng>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <filesystem supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='driverType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>path</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>handle</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>virtiofs</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </filesystem>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <tpm supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-tis</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tpm-crb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emulator</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>external</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendVersion'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>2.0</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </tpm>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <redirdev supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='bus'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>usb</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </redirdev>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <channel supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </channel>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <crypto supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendModel'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>builtin</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </crypto>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <interface supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='backendType'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>default</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>passt</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </interface>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <panic supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='model'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>isa</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>hyperv</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </panic>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <console supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='type'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>null</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vc</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pty</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dev</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>file</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>pipe</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stdio</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>udp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tcp</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>unix</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>qemu-vdagent</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>dbus</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </console>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </devices>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   <features>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <gic supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <genid supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <backup supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <async-teardown supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <ps2 supported='yes'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sev supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <sgx supported='no'/>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <hyperv supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='features'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>relaxed</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vapic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>spinlocks</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vpindex</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>runtime</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>synic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>stimer</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reset</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>vendor_id</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>frequencies</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>reenlightenment</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tlbflush</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>ipi</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>avic</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>emsr_bitmap</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>xmm_input</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </defaults>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </hyperv>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     <launchSecurity supported='yes'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       <enum name='sectype'>
Jan 05 20:55:48 compute-0 nova_compute[185024]:         <value>tdx</value>
Jan 05 20:55:48 compute-0 nova_compute[185024]:       </enum>
Jan 05 20:55:48 compute-0 nova_compute[185024]:     </launchSecurity>
Jan 05 20:55:48 compute-0 nova_compute[185024]:   </features>
Jan 05 20:55:48 compute-0 nova_compute[185024]: </domainCapabilities>
Jan 05 20:55:48 compute-0 nova_compute[185024]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.283 185028 DEBUG nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.284 185028 INFO nova.virt.libvirt.host [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Secure Boot support detected
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.286 185028 INFO nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.287 185028 INFO nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.303 185028 DEBUG nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.457 185028 INFO nova.virt.node [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Determined node identity 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from /var/lib/nova/compute_id
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.484 185028 WARNING nova.compute.manager [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Compute nodes ['98d67ab0-e613-4c26-9eaa-22cf91b060a7'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.546 185028 INFO nova.compute.manager [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 05 20:55:48 compute-0 sudo[185909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gapxpkfmtbrlbqfbxphjrvbvlolvzsrm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646548.0550687-1283-202665290672047/AnsiballZ_systemd.py'
Jan 05 20:55:48 compute-0 sudo[185909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.594 185028 WARNING nova.compute.manager [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.595 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.596 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.596 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:55:48 compute-0 nova_compute[185024]: 2026-01-05 20:55:48.597 185028 DEBUG nova.compute.resource_tracker [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:55:48 compute-0 systemd[1]: Starting libvirt nodedev daemon...
Jan 05 20:55:48 compute-0 systemd[1]: Started libvirt nodedev daemon.
Jan 05 20:55:48 compute-0 python3.9[185911]: ansible-ansible.builtin.systemd Invoked with name=edpm_nova_compute.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 20:55:48 compute-0 systemd[1]: Stopping nova_compute container...
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.038 185028 WARNING nova.virt.libvirt.driver [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.040 185028 DEBUG nova.compute.resource_tracker [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6065MB free_disk=72.64886474609375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.040 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.040 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.064 185028 DEBUG oslo_concurrency.lockutils [None req-a893a627-10af-451d-bc17-5da1f0d1f9a5 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.024s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.065 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.065 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.066 185028 DEBUG oslo_concurrency.lockutils [None req-7b3111d5-7679-4131-a904-5167176b9ca2 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 20:55:49 compute-0 nova_compute[185024]: 2026-01-05 20:55:49.070 185028 INFO oslo_messaging._drivers.amqpdriver [-] No calling threads waiting for msg_id : 797b91501c6f455a83eadc02e34e097f
Jan 05 20:55:49 compute-0 virtqemud[185616]: libvirt version: 11.9.0, package: 1.el9 (builder@centos.org, 2025-11-04-09:54:50, )
Jan 05 20:55:49 compute-0 virtqemud[185616]: hostname: compute-0
Jan 05 20:55:49 compute-0 virtqemud[185616]: End of file while reading data: Input/output error
Jan 05 20:55:49 compute-0 podman[185938]: 2026-01-05 20:55:49.628093108 +0000 UTC m=+0.640829596 container died 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=edpm, container_name=nova_compute, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 20:55:49 compute-0 systemd[1]: libpod-43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85.scope: Deactivated successfully.
Jan 05 20:55:49 compute-0 systemd[1]: libpod-43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85.scope: Consumed 3.525s CPU time.
Jan 05 20:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85-userdata-shm.mount: Deactivated successfully.
Jan 05 20:55:49 compute-0 systemd[1]: var-lib-containers-storage-overlay-ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6-merged.mount: Deactivated successfully.
Jan 05 20:55:49 compute-0 podman[185938]: 2026-01-05 20:55:49.726487018 +0000 UTC m=+0.739223496 container cleanup 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, io.buildah.version=1.41.3, config_id=edpm, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=nova_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 20:55:49 compute-0 podman[185938]: nova_compute
Jan 05 20:55:49 compute-0 podman[185953]: 2026-01-05 20:55:49.73500817 +0000 UTC m=+0.082483197 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 20:55:49 compute-0 podman[185989]: nova_compute
Jan 05 20:55:49 compute-0 systemd[1]: edpm_nova_compute.service: Deactivated successfully.
Jan 05 20:55:49 compute-0 systemd[1]: Stopped nova_compute container.
Jan 05 20:55:49 compute-0 systemd[1]: Starting nova_compute container...
Jan 05 20:55:50 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/etc/nvme supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/etc/multipath supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/iscsi supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/libvirt supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/ff2f86b48e5ac0f683fd871b7628f53c95a92bcf8563973fb035b096269a83a6/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:50 compute-0 podman[186002]: 2026-01-05 20:55:50.062595184 +0000 UTC m=+0.136237886 container init 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, config_id=edpm, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 20:55:50 compute-0 podman[186002]: 2026-01-05 20:55:50.072853141 +0000 UTC m=+0.146495843 container start 43547f248f357ed0221745a91d4d00a584ad4442a428f998624117fe6fc5df85 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=nova_compute, org.label-schema.schema-version=1.0, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': True, 'user': 'nova', 'restart': 'always', 'command': 'kolla_start', 'net': 'host', 'pid': 'host', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS'}, 'volumes': ['/var/lib/openstack/config/nova:/var/lib/kolla/config_files:ro', '/var/lib/openstack/cacerts/nova/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/etc/localtime:/etc/localtime:ro', '/lib/modules:/lib/modules:ro', '/dev:/dev', '/var/lib/libvirt:/var/lib/libvirt', '/run/libvirt:/run/libvirt:shared', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/iscsi:/var/lib/iscsi', '/etc/multipath:/etc/multipath', '/etc/multipath.conf:/etc/multipath.conf:ro,Z', '/etc/iscsi:/etc/iscsi:ro', '/etc/nvme:/etc/nvme', '/var/lib/openstack/config/ceph:/var/lib/kolla/config_files/ceph:ro', '/etc/ssh/ssh_known_hosts:/etc/ssh/ssh_known_hosts:ro']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=edpm, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 20:55:50 compute-0 podman[186002]: nova_compute
Jan 05 20:55:50 compute-0 nova_compute[186018]: + sudo -E kolla_set_configs
Jan 05 20:55:50 compute-0 systemd[1]: Started nova_compute container.
Jan 05 20:55:50 compute-0 sudo[185909]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Validating config file
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying service configuration files
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/nova/nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/nova/nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/01-nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/01-nova.conf to /etc/nova/nova.conf.d/01-nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/01-nova.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/25-nova-extra.conf to /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/25-nova-extra.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/nova-blank.conf to /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/nova-blank.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/02-nova-host-specific.conf to /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/nova/nova.conf.d/02-nova-host-specific.conf
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /etc/ceph
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Creating directory /etc/ceph
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /etc/ceph
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-privatekey to /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /var/lib/nova/.ssh/config
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/ssh-config to /var/lib/nova/.ssh/config
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Deleting /usr/sbin/iscsiadm
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Copying /var/lib/kolla/config_files/run-on-host to /usr/sbin/iscsiadm
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /usr/sbin/iscsiadm
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Writing out command to execute
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:50 compute-0 nova_compute[186018]: INFO:__main__:Setting permission for /var/lib/nova/.ssh/config
Jan 05 20:55:50 compute-0 nova_compute[186018]: ++ cat /run_command
Jan 05 20:55:50 compute-0 nova_compute[186018]: + CMD=nova-compute
Jan 05 20:55:50 compute-0 nova_compute[186018]: + ARGS=
Jan 05 20:55:50 compute-0 nova_compute[186018]: + sudo kolla_copy_cacerts
Jan 05 20:55:50 compute-0 nova_compute[186018]: + [[ ! -n '' ]]
Jan 05 20:55:50 compute-0 nova_compute[186018]: + . kolla_extend_start
Jan 05 20:55:50 compute-0 nova_compute[186018]: Running command: 'nova-compute'
Jan 05 20:55:50 compute-0 nova_compute[186018]: + echo 'Running command: '\''nova-compute'\'''
Jan 05 20:55:50 compute-0 nova_compute[186018]: + umask 0022
Jan 05 20:55:50 compute-0 nova_compute[186018]: + exec nova-compute
Jan 05 20:55:50 compute-0 sudo[186179]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-flnegedwxgraorcdlhvxgmsxhbohnhyh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646550.4351172-1292-235398274191611/AnsiballZ_podman_container.py'
Jan 05 20:55:50 compute-0 sudo[186179]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:55:51 compute-0 python3.9[186181]: ansible-containers.podman.podman_container Invoked with name=nova_compute_init state=started executable=podman detach=True debug=False force_restart=False force_delete=True generate_systemd={} image_strict=False recreate=False image=None annotation=None arch=None attach=None authfile=None blkio_weight=None blkio_weight_device=None cap_add=None cap_drop=None cgroup_conf=None cgroup_parent=None cgroupns=None cgroups=None chrootdirs=None cidfile=None cmd_args=None conmon_pidfile=None command=None cpu_period=None cpu_quota=None cpu_rt_period=None cpu_rt_runtime=None cpu_shares=None cpus=None cpuset_cpus=None cpuset_mems=None decryption_key=None delete_depend=None delete_time=None delete_volumes=None detach_keys=None device=None device_cgroup_rule=None device_read_bps=None device_read_iops=None device_write_bps=None device_write_iops=None dns=None dns_option=None dns_search=None entrypoint=None env=None env_file=None env_host=None env_merge=None etc_hosts=None expose=None gidmap=None gpus=None group_add=None group_entry=None healthcheck=None healthcheck_interval=None healthcheck_retries=None healthcheck_start_period=None health_startup_cmd=None health_startup_interval=None health_startup_retries=None health_startup_success=None health_startup_timeout=None healthcheck_timeout=None healthcheck_failure_action=None hooks_dir=None hostname=None hostuser=None http_proxy=None image_volume=None init=None init_ctr=None init_path=None interactive=None ip=None ip6=None ipc=None kernel_memory=None label=None label_file=None log_driver=None log_level=None log_opt=None mac_address=None memory=None memory_reservation=None memory_swap=None memory_swappiness=None mount=None network=None network_aliases=None no_healthcheck=None no_hosts=None oom_kill_disable=None oom_score_adj=None os=None passwd=None passwd_entry=None personality=None pid=None pid_file=None pids_limit=None platform=None pod=None pod_id_file=None preserve_fd=None preserve_fds=None privileged=None publish=None publish_all=None pull=None quadlet_dir=None quadlet_filename=None quadlet_file_mode=None quadlet_options=None rdt_class=None read_only=None read_only_tmpfs=None requires=None restart_policy=None restart_time=None retry=None retry_delay=None rm=None rmi=None rootfs=None seccomp_policy=None secrets=NOT_LOGGING_PARAMETER sdnotify=None security_opt=None shm_size=None shm_size_systemd=None sig_proxy=None stop_signal=None stop_timeout=None stop_time=None subgidname=None subuidname=None sysctl=None systemd=None timeout=None timezone=None tls_verify=None tmpfs=None tty=None uidmap=None ulimit=None umask=None unsetenv=None unsetenv_all=None user=None userns=None uts=None variant=None volume=None volumes_from=None workdir=None
Jan 05 20:55:51 compute-0 systemd[1]: Started libpod-conmon-2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142.scope.
Jan 05 20:55:51 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb008282ef1fb7d53298130427dea3af6a141d462d49b0245e4fcdee0e8fef99/merged/usr/sbin/nova_statedir_ownership.py supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb008282ef1fb7d53298130427dea3af6a141d462d49b0245e4fcdee0e8fef99/merged/var/lib/nova supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:51 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fb008282ef1fb7d53298130427dea3af6a141d462d49b0245e4fcdee0e8fef99/merged/var/lib/_nova_secontext supports timestamps until 2038 (0x7fffffff)
Jan 05 20:55:51 compute-0 podman[186207]: 2026-01-05 20:55:51.456998246 +0000 UTC m=+0.159494831 container init 2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=edpm, container_name=nova_compute_init, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']})
Jan 05 20:55:51 compute-0 podman[186207]: 2026-01-05 20:55:51.464780909 +0000 UTC m=+0.167277484 container start 2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=edpm, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init)
Jan 05 20:55:51 compute-0 python3.9[186181]: ansible-containers.podman.podman_container PODMAN-CONTAINER-DEBUG: podman start nova_compute_init
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Applying nova statedir ownership
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Target ownership for /var/lib/nova: 42436:42436
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Changing ownership of /var/lib/nova from 1000:1000 to 42436:42436
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova to system_u:object_r:container_file_t:s0
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Checking uid: 1000 gid: 1000 path: /var/lib/nova/instances/
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Changing ownership of /var/lib/nova/instances from 1000:1000 to 42436:42436
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/instances to system_u:object_r:container_file_t:s0
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Ownership of /var/lib/nova/.ssh already 42436:42436
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Setting selinux context of /var/lib/nova/.ssh to system_u:object_r:container_file_t:s0
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/ssh-privatekey
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Checking uid: 42436 gid: 42436 path: /var/lib/nova/.ssh/config
Jan 05 20:55:51 compute-0 nova_compute_init[186229]: INFO:nova_statedir:Nova statedir ownership complete
Jan 05 20:55:51 compute-0 systemd[1]: libpod-2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142.scope: Deactivated successfully.
Jan 05 20:55:51 compute-0 podman[186252]: 2026-01-05 20:55:51.60317537 +0000 UTC m=+0.033458252 container died 2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:55:51 compute-0 sudo[186179]: pam_unix(sudo:session): session closed for user root
Jan 05 20:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142-userdata-shm.mount: Deactivated successfully.
Jan 05 20:55:51 compute-0 systemd[1]: var-lib-containers-storage-overlay-fb008282ef1fb7d53298130427dea3af6a141d462d49b0245e4fcdee0e8fef99-merged.mount: Deactivated successfully.
Jan 05 20:55:51 compute-0 podman[186252]: 2026-01-05 20:55:51.636031785 +0000 UTC m=+0.066314607 container cleanup 2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142 (image=quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified, name=nova_compute_init, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'image': 'quay.io/podified-antelope-centos9/openstack-nova-compute:current-podified', 'privileged': False, 'user': 'root', 'restart': 'never', 'command': 'bash -c $* -- eval python3 /sbin/nova_statedir_ownership.py | logger -t nova_compute_init', 'net': 'none', 'security_opt': ['label=disable'], 'detach': False, 'environment': {'NOVA_STATEDIR_OWNERSHIP_SKIP': '/var/lib/nova/compute_id', '__OS_DEBUG': False}, 'volumes': ['/dev/log:/dev/log', '/var/lib/nova:/var/lib/nova:shared', '/var/lib/_nova_secontext:/var/lib/_nova_secontext:shared,z', '/var/lib/openstack/config/nova/nova_statedir_ownership.py:/sbin/nova_statedir_ownership.py:z']}, config_id=edpm, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=nova_compute_init, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 20:55:51 compute-0 systemd[1]: libpod-conmon-2ae209b2f8054863215b3eb022354533f0b4a243cc160d48b924804b03e54142.scope: Deactivated successfully.
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.254 186022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_linux_bridge.linux_bridge.LinuxBridgePlugin'>' with name 'linux_bridge' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.255 186022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_noop.noop.NoOpPlugin'>' with name 'noop' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.255 186022 DEBUG os_vif [-] Loaded VIF plugin class '<class 'vif_plug_ovs.ovs.OvsPlugin'>' with name 'ovs' initialize /usr/lib/python3.9/site-packages/os_vif/__init__.py:44
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.255 186022 INFO os_vif [-] Loaded VIF plugins: linux_bridge, noop, ovs
Jan 05 20:55:52 compute-0 sshd-session[162856]: Connection closed by 192.168.122.30 port 38500
Jan 05 20:55:52 compute-0 sshd-session[162853]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:55:52 compute-0 systemd[1]: session-23.scope: Deactivated successfully.
Jan 05 20:55:52 compute-0 systemd-logind[788]: Session 23 logged out. Waiting for processes to exit.
Jan 05 20:55:52 compute-0 systemd[1]: session-23.scope: Consumed 2min 4.644s CPU time.
Jan 05 20:55:52 compute-0 systemd-logind[788]: Removed session 23.
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.419 186022 DEBUG oslo_concurrency.processutils [-] Running cmd (subprocess): grep -F node.session.scan /sbin/iscsiadm execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.450 186022 DEBUG oslo_concurrency.processutils [-] CMD "grep -F node.session.scan /sbin/iscsiadm" returned: 1 in 0.030s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.450 186022 DEBUG oslo_concurrency.processutils [-] 'grep -F node.session.scan /sbin/iscsiadm' failed. Not Retrying. execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:473
Jan 05 20:55:52 compute-0 nova_compute[186018]: 2026-01-05 20:55:52.972 186022 INFO nova.virt.driver [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Loading compute driver 'libvirt.LibvirtDriver'
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.108 186022 INFO nova.compute.provider_config [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] No provider configs found in /etc/nova/provider_config/. If files are present, ensure the Nova process has access.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.123 186022 DEBUG oslo_concurrency.lockutils [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Acquiring lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.124 186022 DEBUG oslo_concurrency.lockutils [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Acquired lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.124 186022 DEBUG oslo_concurrency.lockutils [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Releasing lock "singleton_lock" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.124 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Full set of CONF: _wait_for_exit_or_signal /usr/lib/python3.9/site-packages/oslo_service/service.py:362
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.124 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.124 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] command line args: [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] config files: ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] allow_resize_to_same_host      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] arq_binding_timeout            = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] backdoor_port                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.125 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] backdoor_socket                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] block_device_allocate_retries  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] block_device_allocate_retries_interval = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cert                           = self.pem log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute_driver                 = libvirt.LibvirtDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute_monitors               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.126 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] config_dir                     = ['/etc/nova/nova.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] config_drive_format            = iso9660 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] config_file                    = ['/etc/nova/nova.conf', '/etc/nova/nova-compute.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] console_host                   = compute-0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] control_exchange               = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.127 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cpu_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] daemon                         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] default_access_ip_network_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] default_availability_zone      = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] default_ephemeral_format       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.128 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'glanceclient=WARN', 'oslo.privsep.daemon=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] default_schedule_zone          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] disk_allocation_ratio          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] enable_new_services            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] enabled_apis                   = ['osapi_compute', 'metadata'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] enabled_ssl_apis               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] flat_injected                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.129 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] force_config_drive             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] force_raw_images               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] heal_instance_info_cache_interval = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] initial_cpu_allocation_ratio   = 4.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.130 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] initial_disk_allocation_ratio  = 0.9 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.131 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] initial_ram_allocation_ratio   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.131 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] injected_network_template      = /usr/lib/python3.9/site-packages/nova/virt/interfaces.template log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.131 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_build_timeout         = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.131 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_delete_interval       = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_name_template         = instance-%08x log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_usage_audit           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_usage_audit_period    = month log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] instances_path                 = /var/lib/nova/instances log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.132 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] internal_service_availability_zone = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] key                            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] live_migration_retry_count     = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_dir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_file                       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.133 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] log_rotation_type              = size log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.134 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] long_rpc_timeout               = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_concurrent_builds          = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_concurrent_live_migrations = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_concurrent_snapshots       = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_local_block_devices        = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_logfile_count              = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.135 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] max_logfile_size_mb            = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] maximum_instance_delete_attempts = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metadata_listen                = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metadata_listen_port           = 8775 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metadata_workers               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] migrate_max_retries            = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] mkisofs_cmd                    = /usr/bin/mkisofs log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.136 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] my_block_storage_ip            = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] my_ip                          = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] network_allocate_retries       = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] non_inheritable_image_properties = ['cache_in_nova', 'bittorrent'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] osapi_compute_listen           = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] osapi_compute_listen_port      = 8774 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] osapi_compute_unique_server_name_scope =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.137 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] osapi_compute_workers          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] password_length                = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] periodic_enable                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] periodic_fuzzy_delay           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] pointer_model                  = usbtablet log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] preallocate_images             = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.138 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] pybasedir                      = /usr/lib/python3.9/site-packages log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ram_allocation_ratio           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reboot_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reclaim_instance_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] record                         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.139 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reimage_timeout_per_gb         = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] report_interval                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rescue_timeout                 = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reserved_host_cpus             = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reserved_host_disk_mb          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reserved_host_memory_mb        = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] reserved_huge_pages            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.140 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] resize_confirm_window          = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] resize_fs_using_block_device   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] resume_guests_state_on_host_boot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rootwrap_config                = /etc/nova/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rpc_response_timeout           = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] run_external_periodic_tasks    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] running_deleted_instance_action = reap log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] running_deleted_instance_poll_interval = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.141 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] running_deleted_instance_timeout = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler_instance_sync_interval = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_down_time              = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] servicegroup_driver            = db log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] shelved_offload_time           = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] shelved_poll_interval          = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] shutdown_timeout               = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.142 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] source_is_ipv6                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ssl_only                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] state_path                     = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] sync_power_state_interval      = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] sync_power_state_pool_size     = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] tempdir                        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.143 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] timeout_nbd                    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] update_resources_interval      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_cow_images                 = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_rootwrap_daemon            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.144 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vcpu_pin_set                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plugging_is_fatal          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plugging_timeout           = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] virt_mkfs                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] volume_usage_poll_interval     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.145 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] web                            = /usr/share/spice-html5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_concurrency.disable_process_locking = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_concurrency.lock_path     = /var/lib/nova/tmp log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_metrics.metrics_buffer_size = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_metrics.metrics_enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_metrics.metrics_process_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.146 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_metrics.metrics_socket_file = /var/tmp/metrics_collector.sock log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_metrics.metrics_thread_stop_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.auth_strategy              = keystone log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.compute_link_prefix        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.config_drive_skip_versions = 1.0 2007-01-19 2007-03-01 2007-08-29 2007-10-10 2007-12-15 2008-02-01 2008-09-01 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.dhcp_domain                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.enable_instance_password   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.147 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.glance_link_prefix         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.instance_list_cells_batch_fixed_size = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.instance_list_cells_batch_strategy = distributed log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.instance_list_per_project_cells = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.list_records_by_skipping_down_cells = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.local_metadata_per_cell    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.max_limit                  = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.148 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.metadata_cache_expiration  = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.149 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.neutron_default_tenant_id  = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.149 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.use_forwarded_for          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.149 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.use_neutron_default_nets   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.149 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_dynamic_connect_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.150 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_dynamic_failure_fatal = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.150 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_dynamic_read_timeout = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.150 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_dynamic_ssl_certfile =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.151 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_dynamic_targets = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.151 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_jsonfile_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.151 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api.vendordata_providers       = ['StaticJSON'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.151 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.backend                  = oslo_cache.dict log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.151 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.backend_argument         = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.config_prefix            = cache.oslo log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.dead_timeout             = 60.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.debug_cache_backend      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.enable_retry_client      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.enable_socket_keepalive  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.152 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.enabled                  = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.expiration_time          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.hashclient_retry_attempts = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.hashclient_retry_delay   = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_dead_retry      = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_password        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.153 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_pool_connection_get_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_pool_flush_on_reconnect = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_pool_maxsize    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_pool_unused_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_sasl_enabled    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_servers         = ['localhost:11211'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_socket_timeout  = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.154 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.memcache_username        =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.proxies                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.retry_attempts           = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.retry_delay              = 0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.socket_keepalive_count   = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.socket_keepalive_idle    = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.socket_keepalive_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.155 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.tls_allowed_ciphers      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.tls_cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.tls_certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.tls_enabled              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cache.tls_keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.auth_type               = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.156 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.catalog_info            = volumev3:cinderv3:internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.cross_az_attach         = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.157 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.endpoint_template       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.158 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.http_retries            = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.158 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.158 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.158 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.os_region_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.158 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cinder.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.consecutive_build_service_disable_threshold = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.cpu_dedicated_set      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.cpu_shared_set         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.image_type_exclude_list = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.159 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.live_migration_wait_for_vif_plug = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.max_concurrent_disk_ops = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.max_disk_devices_to_attach = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.packing_host_numa_cells_allocation_strategy = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.provider_config_location = /etc/nova/provider_config/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.resource_provider_association_refresh = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.160 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.shutdown_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] compute.vmdk_allowed_types     = ['streamOptimized', 'monolithicSparse'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] conductor.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] console.allowed_origins        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] console.ssl_ciphers            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] console.ssl_minimum_version    = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.161 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] consoleauth.token_ttl          = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.162 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.162 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.162 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.162 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.162 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.163 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.service_type            = accelerator log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.164 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] cyborg.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.backend               = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.connection            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.connection_debug      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.165 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.connection_trace      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.db_max_retries        = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.db_retry_interval     = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.max_overflow          = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.max_pool_size         = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.166 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.max_retries           = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.mysql_enable_ndb      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.mysql_sql_mode        = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.pool_timeout          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.retry_interval        = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.167 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.slave_connection      = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.168 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] database.sqlite_synchronous    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.168 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.backend           = sqlalchemy log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.168 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.connection        = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.168 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.connection_debug  = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.168 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.connection_parameters =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.169 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.connection_recycle_time = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.169 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.connection_trace  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.169 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.db_inc_retry_interval = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.169 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.db_max_retries    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.169 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.db_max_retry_interval = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.170 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.db_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.170 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.max_overflow      = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.170 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.max_pool_size     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.170 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.max_retries       = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.170 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.mysql_enable_ndb  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.171 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.mysql_sql_mode    = TRADITIONAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.171 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.mysql_wsrep_sync_wait = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.171 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.pool_timeout      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.171 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.retry_interval    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.171 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.slave_connection  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] api_database.sqlite_synchronous = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] devices.enabled_mdev_types     = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ephemeral_storage_encryption.cipher = aes-xts-plain64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ephemeral_storage_encryption.enabled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ephemeral_storage_encryption.key_size = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.172 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.api_servers             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.debug                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.173 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.default_trusted_certificate_ids = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.enable_certificate_validation = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.enable_rbd_download     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.174 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.num_retries             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.rbd_ceph_conf           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.rbd_connect_timeout     = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.rbd_pool                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.rbd_user                =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.175 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.region_name             = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.176 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.176 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.service_type            = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.176 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.176 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.176 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.valid_interfaces        = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.verify_glance_signatures = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] glance.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] guestfs.debug                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.config_drive_cdrom      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.177 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.config_drive_inject_password = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.dynamic_memory_ratio    = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.enable_instance_metrics_collection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.enable_remotefx         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.instances_path_share    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.iscsi_initiator_list    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.limit_cpu_features      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.178 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.mounted_disk_query_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.mounted_disk_query_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.power_state_check_timeframe = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.power_state_event_polling_interval = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.qemu_img_cmd            = qemu-img.exe log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.use_multipath_io        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.volume_attach_retry_count = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.179 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.volume_attach_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.180 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.vswitch_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.180 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] hyperv.wait_soft_reboot_seconds = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.180 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] mks.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.180 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] mks.mksproxy_base_url          = http://127.0.0.1:6090/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.180 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.manager_interval   = 2400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.precache_concurrency = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.remove_unused_base_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.remove_unused_original_minimum_age_seconds = 86400 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.remove_unused_resized_minimum_age_seconds = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] image_cache.subdirectory_name  = _base log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.api_max_retries         = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.181 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.api_retry_interval      = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.auth_section            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.auth_type               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.cafile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.certfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.collect_timing          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.182 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.connect_retries         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.connect_retry_delay     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.endpoint_override       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.keyfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.max_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.min_version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.183 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.partition_key           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.peer_list               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.region_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.serial_console_state_timeout = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.service_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.service_type            = baremetal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.split_loggers           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.184 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.status_code_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.timeout                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.valid_interfaces        = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ironic.version                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] key_manager.backend            = barbican log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] key_manager.fixed_key          = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.185 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.auth_endpoint         = http://localhost/identity/v3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.barbican_api_version  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.barbican_endpoint     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.barbican_endpoint_type = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.barbican_region_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.186 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.number_of_retries     = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.retry_delay           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.187 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.send_service_user_token = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.188 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.188 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.188 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.verify_ssl            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.188 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican.verify_ssl_path       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.cafile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.189 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.keyfile  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] barbican_service_user.timeout  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.approle_role_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.approle_secret_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.190 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.kv_mountpoint            = secret log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.kv_version               = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.namespace                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.root_token_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.191 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.ssl_ca_crt_file          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.use_ssl                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vault.vault_url                = http://127.0.0.1:8200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.cafile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.certfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.192 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.collect_timing        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.connect_retries       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.connect_retry_delay   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.endpoint_override     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.insecure              = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.keyfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.193 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.max_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.min_version           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.region_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.service_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.service_type          = identity log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.split_loggers         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.194 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.status_code_retries   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.195 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.195 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.timeout               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.195 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.valid_interfaces      = ['internal', 'public'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.195 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] keystone.version               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.195 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.connection_uri         =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_mode               = host-model log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_model_extra_flags  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_models             = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_power_governor_high = performance log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_power_governor_low = powersave log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_power_management   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.196 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.cpu_power_management_strategy = cpu_state log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.device_detach_attempts = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.device_detach_timeout  = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.disk_cachemodes        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.disk_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.enabled_perf_events    = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.file_backed_memory     = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.197 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.gid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.hw_disk_discard        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.hw_machine_type        = ['x86_64=q35'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_rbd_ceph_conf   =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_rbd_glance_copy_poll_interval = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_rbd_glance_copy_timeout = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.198 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_rbd_glance_store_name =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_rbd_pool        = rbd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_type            = qcow2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.images_volume_group    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.inject_key             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.inject_partition       = -2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.199 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.inject_password        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.iscsi_iface            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.iser_use_multipath     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_bandwidth = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_completion_timeout = 800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_downtime = 500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_downtime_delay = 75 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_downtime_steps = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.200 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_inbound_addr = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_permit_auto_converge = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_permit_post_copy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_scheme  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_timeout_action = force_complete log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_tunnelled = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.201 186022 WARNING oslo_config.cfg [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] Deprecated: Option "live_migration_uri" from group "libvirt" is deprecated for removal (
Jan 05 20:55:53 compute-0 nova_compute[186018]: live_migration_uri is deprecated for removal in favor of two other options that
Jan 05 20:55:53 compute-0 nova_compute[186018]: allow to change live migration scheme and target URI: ``live_migration_scheme``
Jan 05 20:55:53 compute-0 nova_compute[186018]: and ``live_migration_inbound_addr`` respectively.
Jan 05 20:55:53 compute-0 nova_compute[186018]: ).  Its value may be silently ignored in the future.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.202 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_uri     = qemu+tls://%s/system log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.202 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.live_migration_with_native_tls = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.202 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.max_queues             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.202 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.mem_stats_period_seconds = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.202 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.nfs_mount_options      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.nfs_mount_point_base   = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_aoe_discover_tries = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_iser_scan_tries    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_memory_encrypted_guests = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_nvme_discover_tries = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_pcie_ports         = 24 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.203 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.num_volume_scan_tries  = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.pmem_namespaces        = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.quobyte_client_cfg     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.quobyte_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rbd_connect_timeout    = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rbd_destroy_volume_retries = 12 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.204 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rbd_destroy_volume_retry_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rbd_secret_uuid        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rbd_user               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.realtime_scheduler_priority = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.remote_filesystem_transport = ssh log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rescue_image_id        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rescue_kernel_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.205 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rescue_ramdisk_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rng_dev_path           = /dev/urandom log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.rx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.smbfs_mount_options    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.smbfs_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.snapshot_compression   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.206 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.snapshot_image_format  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.snapshots_directory    = /var/lib/nova/instances/snapshots log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.sparse_logical_volumes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.swtpm_enabled          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.swtpm_group            = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.swtpm_user             = tss log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.207 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.sysinfo_serial         = unique log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.tx_queue_size          = 512 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.uid_maps               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.use_virtio_for_bridges = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.virt_type              = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.volume_clear           = zero log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.volume_clear_size      = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.208 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.volume_use_multipath   = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_cache_path   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_log_path     = /var/log/vstorage/%(cluster_name)s/nova.log.gz log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_mount_group  = qemu log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_mount_opts   = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_mount_perms  = 0770 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.209 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_mount_point_base = /var/lib/nova/mnt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.210 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.vzstorage_mount_user   = stack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.210 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] libvirt.wait_soft_reboot_seconds = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.210 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.auth_section           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.210 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.auth_type              = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.210 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.211 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.211 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.211 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.connect_retries        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.211 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.connect_retry_delay    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.211 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.default_floating_pool  = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.endpoint_override      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.extension_sync_interval = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.http_retries           = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.max_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.212 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.metadata_proxy_shared_secret = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.min_version            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.ovs_bridge             = br-int log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.physnets               = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.region_name            = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.service_metadata_proxy = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.213 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.service_name           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.service_type           = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.status_code_retries    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.214 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.valid_interfaces       = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] neutron.version                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] notifications.bdms_in_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] notifications.default_level    = INFO log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] notifications.notification_format = unversioned log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] notifications.notify_on_state_change = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.215 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] notifications.versioned_notifications_topics = ['versioned_notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] pci.alias                      = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] pci.device_spec                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] pci.report_in_placement        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.auth_section         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.auth_type            = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.216 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.auth_url             = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.cafile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.certfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.collect_timing       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.connect_retries      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.connect_retry_delay  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.default_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.217 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.default_domain_name  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.domain_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.domain_name          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.endpoint_override    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.insecure             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.keyfile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.218 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.max_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.min_version          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.password             = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.project_domain_id    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.project_domain_name  = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.project_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.project_name         = service log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.219 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.region_name          = regionOne log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.service_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.service_type         = placement log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.split_loggers        = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.status_code_retries  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.system_scope         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.220 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.timeout              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.trust_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.user_domain_id       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.user_domain_name     = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.user_id              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.username             = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.221 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.valid_interfaces     = ['internal'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] placement.version              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.cores                    = 20 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.count_usage_from_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.driver                   = nova.quota.DbQuotaDriver log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.injected_file_content_bytes = 10240 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.injected_file_path_length = 255 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.222 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.injected_files           = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.instances                = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.key_pairs                = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.metadata_items           = 128 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.ram                      = 51200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.recheck_quota            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.223 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.server_group_members     = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.224 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] quota.server_groups            = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.224 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rdp.enabled                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.224 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] rdp.html5_proxy_base_url       = http://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.224 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.discover_hosts_in_cells_interval = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.225 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.enable_isolated_aggregate_filtering = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.225 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.image_metadata_prefilter = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.225 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.limit_tenants_to_placement_aggregate = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.225 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.max_attempts         = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.225 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.max_placement_results = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.placement_aggregate_required_for_tenants = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.query_placement_for_availability_zone = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.query_placement_for_image_type_support = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.query_placement_for_routed_network_aggregates = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] scheduler.workers              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.226 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_namespace = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.aggregate_image_properties_isolation_separator = . log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.available_filters = ['nova.scheduler.filters.all_filters'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.build_failure_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.cpu_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.cross_cell_move_weight_multiplier = 1000000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.disk_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.227 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.enabled_filters = ['ComputeFilter', 'ComputeCapabilitiesFilter', 'ImagePropertiesFilter', 'ServerGroupAntiAffinityFilter', 'ServerGroupAffinityFilter'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.host_subset_size = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.image_properties_default_architecture = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.io_ops_weight_multiplier = -1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.isolated_hosts = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.isolated_images = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.228 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.max_instances_per_host = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.max_io_ops_per_host = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.pci_in_placement = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.pci_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.ram_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.restrict_isolated_hosts_to_isolated_images = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.shuffle_best_same_weighed_hosts = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.229 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.soft_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.soft_anti_affinity_weight_multiplier = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.track_instance_changes = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] filter_scheduler.weight_classes = ['nova.scheduler.weights.all_weighers'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metrics.required               = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metrics.weight_multiplier      = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metrics.weight_of_unavailable  = -10000.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.230 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] metrics.weight_setting         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.231 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.base_url        = ws://127.0.0.1:6083/ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.231 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.enabled         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.231 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.port_range      = 10000:20000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.231 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.231 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.serialproxy_host = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] serial_console.serialproxy_port = 6083 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.auth_section      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.auth_type         = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.cafile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.certfile          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.collect_timing    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.232 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.insecure          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.keyfile           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.send_service_user_token = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.split_loggers     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] service_user.timeout           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.agent_enabled            = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.233 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.enabled                  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.html5proxy_base_url      = http://127.0.0.1:6082/spice_auto.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.html5proxy_host          = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.html5proxy_port          = 6082 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.image_compression        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.jpeg_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.playback_compression     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.234 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.server_listen            = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.server_proxyclient_address = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.streaming_mode           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] spice.zlib_compression         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] upgrade_levels.baseapi         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] upgrade_levels.cert            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.235 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] upgrade_levels.compute         = auto log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] upgrade_levels.conductor       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] upgrade_levels.scheduler       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.auth_type = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.cafile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.certfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.236 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.insecure = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.keyfile = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vendordata_dynamic_auth.timeout = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.cache_prefix            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.237 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.cluster_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.connection_pool_size    = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.console_delay_seconds   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.datastore_regex         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.host_ip                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.238 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.host_username           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.integration_bridge      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.maximum_objects         = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.pbm_default_policy      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.pbm_enabled             = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.239 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.pbm_wsdl_location       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.serial_log_dir          = /opt/vmware/vspc log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.serial_port_proxy_uri   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.serial_port_service_uri = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.use_linked_clone        = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.vnc_keymap              = en-us log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.240 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.vnc_port                = 5900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.241 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vmware.vnc_port_total          = 10000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.241 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.auth_schemes               = ['none'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.241 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.enabled                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.241 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.novncproxy_base_url        = https://nova-novncproxy-cell1-public-openstack.apps-crc.testing/vnc_lite.html log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.241 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.novncproxy_host            = 0.0.0.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.novncproxy_port            = 6080 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.server_listen              = ::0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.server_proxyclient_address = 192.168.122.100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.vencrypt_ca_certs          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.vencrypt_client_cert       = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.242 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vnc.vencrypt_client_key        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_compute_service_check_for_ffu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_deep_image_inspection = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_fallback_pcpu_query = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_group_policy_check_upcall = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_libvirt_livesnapshot = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.disable_rootwrap   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.243 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.enable_numa_live_migration = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.244 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.enable_qemu_monitor_announce_self = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.244 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.ensure_libvirt_rbd_instance_dir_cleanup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.244 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.handle_virt_lifecycle_events = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.244 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.libvirt_disable_apic = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.244 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.never_download_image_if_on_rbd = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.qemu_monitor_announce_self_count = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.qemu_monitor_announce_self_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.reserve_disk_resource_for_image_cache = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.skip_cpu_compare_at_startup = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.skip_cpu_compare_on_dest = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.245 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.skip_hypervisor_version_check_on_lm = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.skip_reserve_in_use_ironic_nodes = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.unified_limits_count_pcpu_as_vcpu = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] workarounds.wait_for_vif_plugged_event_during_hard_reboot = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.api_paste_config          = api-paste.ini log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.client_socket_timeout     = 900 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.246 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.default_pool_size         = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.keep_alive                = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.max_header_line           = 16384 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.secure_proxy_ssl_header   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.ssl_ca_file               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.ssl_cert_file             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.ssl_key_file              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.247 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.tcp_keepidle              = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.248 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] wsgi.wsgi_log_format           = %(client_ip)s "%(request_line)s" status: %(status_code)s len: %(body_length)s time: %(wall_seconds).7f log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.248 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] zvm.ca_file                    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.248 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] zvm.cloud_connector_url        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.248 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] zvm.image_tmp_path             = /var/lib/nova/images log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.248 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] zvm.reachable_timeout          = 300 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.249 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.enforce_new_defaults = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.250 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.enforce_scope      = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.250 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.policy_default_rule = default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.251 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.policy_dirs        = ['policy.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.251 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.policy_file        = policy.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.251 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.remote_content_type = application/x-www-form-urlencoded log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.251 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.remote_ssl_ca_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.251 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.remote_ssl_client_crt_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.remote_ssl_client_key_file = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_policy.remote_ssl_verify_server_crt = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_versionedobjects.fatal_exception_format_errors = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_middleware.http_basic_auth_user_file = /etc/htpasswd log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] remote_debug.host              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.252 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] remote_debug.port              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.253 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.253 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.amqp_durable_queues = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.253 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.253 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.253 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.254 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.255 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_quorum_queue = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.256 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.257 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.auth_section        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.auth_type           = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.auth_url            = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.258 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.cafile              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.certfile            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.collect_timing      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.connect_retries     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.connect_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.default_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.default_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.259 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.domain_id           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.domain_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.endpoint_id         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.endpoint_override   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.insecure            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.keyfile             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.max_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.260 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.min_version         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.password            = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.project_domain_id   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.project_domain_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.project_id          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.project_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.region_name         = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.261 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.service_name        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.service_type        = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.split_loggers       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.status_code_retries = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.status_code_retry_delay = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.system_scope        = all log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.timeout             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.262 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.trust_id            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.user_domain_id      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.user_domain_name    = Default log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.user_id             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.username            = nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.valid_interfaces    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_limit.version             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.263 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_reports.file_event_handler = /var/lib/nova log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.capabilities = [12] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.group = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.264 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_linux_bridge_privileged.user = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.capabilities = [12, 1] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.group  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.logger_name = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.265 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] vif_plug_ovs_privileged.user   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.flat_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.forward_bridge_interface = ['all'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.iptables_bottom_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.iptables_drop_action = DROP log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.iptables_top_regex =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.network_device_mtu = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.266 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.use_ipv6   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_linux_bridge.vlan_interface = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.isolate_vif         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.network_device_mtu  = 1500 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.ovs_vsctl_timeout   = 120 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.ovsdb_connection    = tcp:127.0.0.1:6640 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.ovsdb_interface     = native log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.267 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_vif_ovs.per_port_bridge     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_brick.lock_path             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_brick.wait_mpath_device_attempts = 4 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] os_brick.wait_mpath_device_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.capabilities   = [21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.group          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.helper_command = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.268 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.logger_name    = os_brick.privileged log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] privsep_osbrick.user           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.capabilities    = [0, 1, 2, 3, 12, 21] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.group           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.helper_command  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.logger_name     = oslo_privsep.daemon log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.269 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.thread_pool_size = 8 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.270 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] nova_sys_admin.user            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.270 186022 DEBUG oslo_service.service [None req-e07c45aa-d85a-40be-81b3-920085160de0 - - - - - -] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.271 186022 INFO nova.service [-] Starting compute node (version 27.5.2-0.20250829104910.6f8decf.el9)
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.287 186022 INFO nova.virt.node [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Determined node identity 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from /var/lib/nova/compute_id
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.288 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Starting native event thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:492
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.289 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Starting green dispatch thread _init_events /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:498
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.290 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Starting connection event dispatch thread initialize /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:620
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.290 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Connecting to libvirt: qemu:///system _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:503
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.309 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Registering for lifecycle events <nova.virt.libvirt.host.Host object at 0x7f5d32dbe6a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:509
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.311 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Registering for connection events: <nova.virt.libvirt.host.Host object at 0x7f5d32dbe6a0> _get_new_connection /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:530
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.312 186022 INFO nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Connection event '1' reason 'None'
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.320 186022 INFO nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Libvirt host capabilities <capabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]: 
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <host>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <uuid>103e5390-173f-4d3f-9983-22472b3a8bf4</uuid>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <arch>x86_64</arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model>EPYC-Rome-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <vendor>AMD</vendor>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <microcode version='16777317'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <signature family='23' model='49' stepping='0'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <topology sockets='8' dies='1' clusters='1' cores='1' threads='1'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <maxphysaddr mode='emulate' bits='40'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='x2apic'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='tsc-deadline'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='osxsave'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='hypervisor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='tsc_adjust'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='spec-ctrl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='stibp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='arch-capabilities'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='cmp_legacy'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='topoext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='virt-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='lbrv'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='tsc-scale'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='vmcb-clean'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='pause-filter'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='pfthreshold'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='svme-addr-chk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='rdctl-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='skip-l1dfl-vmentry'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='mds-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature name='pschange-mc-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <pages unit='KiB' size='4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <pages unit='KiB' size='2048'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <pages unit='KiB' size='1048576'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <power_management>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <suspend_mem/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <suspend_disk/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <suspend_hybrid/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </power_management>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <iommu support='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <migration_features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <live/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <uri_transports>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <uri_transport>tcp</uri_transport>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <uri_transport>rdma</uri_transport>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </uri_transports>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </migration_features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <topology>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <cells num='1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <cell id='0'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <memory unit='KiB'>7864312</memory>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <pages unit='KiB' size='4'>1966078</pages>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <pages unit='KiB' size='2048'>0</pages>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <pages unit='KiB' size='1048576'>0</pages>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <distances>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <sibling id='0' value='10'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           </distances>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           <cpus num='8'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='0' socket_id='0' die_id='0' cluster_id='65535' core_id='0' siblings='0'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='1' socket_id='1' die_id='1' cluster_id='65535' core_id='0' siblings='1'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='2' socket_id='2' die_id='2' cluster_id='65535' core_id='0' siblings='2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='3' socket_id='3' die_id='3' cluster_id='65535' core_id='0' siblings='3'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='4' socket_id='4' die_id='4' cluster_id='65535' core_id='0' siblings='4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='5' socket_id='5' die_id='5' cluster_id='65535' core_id='0' siblings='5'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='6' socket_id='6' die_id='6' cluster_id='65535' core_id='0' siblings='6'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:             <cpu id='7' socket_id='7' die_id='7' cluster_id='65535' core_id='0' siblings='7'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:           </cpus>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         </cell>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </cells>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </topology>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <cache>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='0' level='2' type='both' size='512' unit='KiB' cpus='0'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='1' level='2' type='both' size='512' unit='KiB' cpus='1'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='2' level='2' type='both' size='512' unit='KiB' cpus='2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='3' level='2' type='both' size='512' unit='KiB' cpus='3'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='4' level='2' type='both' size='512' unit='KiB' cpus='4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='5' level='2' type='both' size='512' unit='KiB' cpus='5'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='6' level='2' type='both' size='512' unit='KiB' cpus='6'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='7' level='2' type='both' size='512' unit='KiB' cpus='7'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='0' level='3' type='both' size='16' unit='MiB' cpus='0'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='1' level='3' type='both' size='16' unit='MiB' cpus='1'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='2' level='3' type='both' size='16' unit='MiB' cpus='2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='3' level='3' type='both' size='16' unit='MiB' cpus='3'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='4' level='3' type='both' size='16' unit='MiB' cpus='4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='5' level='3' type='both' size='16' unit='MiB' cpus='5'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='6' level='3' type='both' size='16' unit='MiB' cpus='6'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <bank id='7' level='3' type='both' size='16' unit='MiB' cpus='7'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </cache>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <secmodel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model>selinux</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <doi>0</doi>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <baselabel type='kvm'>system_u:system_r:svirt_t:s0</baselabel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <baselabel type='qemu'>system_u:system_r:svirt_tcg_t:s0</baselabel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </secmodel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <secmodel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model>dac</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <doi>0</doi>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <baselabel type='kvm'>+107:+107</baselabel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <baselabel type='qemu'>+107:+107</baselabel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </secmodel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </host>
Jan 05 20:55:53 compute-0 nova_compute[186018]: 
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <guest>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <os_type>hvm</os_type>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <arch name='i686'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <wordsize>32</wordsize>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <domain type='qemu'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <domain type='kvm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <pae/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <nonpae/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <acpi default='on' toggle='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <apic default='on' toggle='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <cpuselection/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <deviceboot/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <disksnapshot default='on' toggle='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <externalSnapshot/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </guest>
Jan 05 20:55:53 compute-0 nova_compute[186018]: 
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <guest>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <os_type>hvm</os_type>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <arch name='x86_64'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <wordsize>64</wordsize>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <emulator>/usr/libexec/qemu-kvm</emulator>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='240' deprecated='yes'>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine canonical='pc-i440fx-rhel7.6.0' maxCpus='240' deprecated='yes'>pc</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='4096'>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine canonical='pc-q35-rhel9.8.0' maxCpus='4096'>q35</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='4096'>pc-q35-rhel9.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.4.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.5.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.3.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.4.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.2.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.2.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710'>pc-q35-rhel9.0.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.0.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <machine maxCpus='710' deprecated='yes'>pc-q35-rhel8.1.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <domain type='qemu'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <domain type='kvm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <acpi default='on' toggle='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <apic default='on' toggle='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <cpuselection/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <deviceboot/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <disksnapshot default='on' toggle='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <externalSnapshot/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </guest>
Jan 05 20:55:53 compute-0 nova_compute[186018]: 
Jan 05 20:55:53 compute-0 nova_compute[186018]: </capabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]: 
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.325 186022 DEBUG nova.virt.libvirt.volume.mount [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Initialising _HostMountState generation 0 host_up /usr/lib/python3.9/site-packages/nova/virt/libvirt/volume/mount.py:130
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.328 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Getting domain capabilities for i686 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.331 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=pc:
Jan 05 20:55:53 compute-0 nova_compute[186018]: <domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <domain>kvm</domain>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <arch>i686</arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <vcpu max='240'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <iothreads supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <os supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='firmware'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <loader supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>rom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pflash</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='readonly'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>yes</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='secure'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </loader>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </os>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='maximumMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <vendor>AMD</vendor>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='succor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='custom' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-128'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-256'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-512'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <memoryBacking supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='sourceType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>anonymous</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>memfd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </memoryBacking>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <disk supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='diskDevice'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>disk</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cdrom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>floppy</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>lun</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ide</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>fdc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>sata</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </disk>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <graphics supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vnc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egl-headless</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </graphics>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <video supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='modelType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vga</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cirrus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>none</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>bochs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ramfb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </video>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hostdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='mode'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>subsystem</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='startupPolicy'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>mandatory</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>requisite</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>optional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='subsysType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pci</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='capsType'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='pciBackend'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hostdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <rng supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>random</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </rng>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <filesystem supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='driverType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>path</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>handle</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtiofs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </filesystem>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <tpm supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-tis</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-crb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emulator</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>external</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendVersion'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>2.0</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </tpm>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <redirdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </redirdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <channel supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </channel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <crypto supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </crypto>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <interface supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>passt</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </interface>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <panic supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>isa</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>hyperv</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </panic>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <console supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>null</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dev</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pipe</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stdio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>udp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tcp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu-vdagent</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </console>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <gic supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <genid supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backup supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <async-teardown supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <ps2 supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sev supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sgx supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hyperv supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='features'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>relaxed</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vapic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>spinlocks</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vpindex</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>runtime</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>synic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stimer</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reset</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vendor_id</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>frequencies</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reenlightenment</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tlbflush</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ipi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>avic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emsr_bitmap</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>xmm_input</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hyperv>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <launchSecurity supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='sectype'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tdx</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </launchSecurity>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]: </domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.339 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Libvirt host hypervisor capabilities for arch=i686 and machine_type=q35:
Jan 05 20:55:53 compute-0 nova_compute[186018]: <domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <domain>kvm</domain>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <arch>i686</arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <vcpu max='4096'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <iothreads supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <os supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='firmware'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <loader supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>rom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pflash</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='readonly'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>yes</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='secure'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </loader>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </os>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='maximumMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <vendor>AMD</vendor>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='succor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='custom' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-128'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-256'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-512'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <memoryBacking supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='sourceType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>anonymous</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>memfd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </memoryBacking>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <disk supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='diskDevice'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>disk</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cdrom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>floppy</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>lun</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>fdc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>sata</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </disk>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <graphics supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vnc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egl-headless</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </graphics>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <video supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='modelType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vga</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cirrus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>none</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>bochs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ramfb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </video>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hostdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='mode'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>subsystem</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='startupPolicy'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>mandatory</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>requisite</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>optional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='subsysType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pci</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='capsType'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='pciBackend'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hostdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <rng supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>random</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </rng>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <filesystem supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='driverType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>path</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>handle</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtiofs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </filesystem>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <tpm supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-tis</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-crb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emulator</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>external</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendVersion'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>2.0</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </tpm>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <redirdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </redirdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <channel supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </channel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <crypto supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </crypto>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <interface supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>passt</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </interface>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <panic supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>isa</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>hyperv</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </panic>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <console supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>null</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dev</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pipe</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stdio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>udp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tcp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu-vdagent</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </console>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <gic supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <genid supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backup supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <async-teardown supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <ps2 supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sev supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sgx supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hyperv supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='features'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>relaxed</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vapic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>spinlocks</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vpindex</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>runtime</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>synic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stimer</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reset</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vendor_id</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>frequencies</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reenlightenment</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tlbflush</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ipi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>avic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emsr_bitmap</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>xmm_input</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hyperv>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <launchSecurity supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='sectype'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tdx</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </launchSecurity>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]: </domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.367 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Getting domain capabilities for x86_64 via machine types: {'pc', 'q35'} _get_machine_types /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:952
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.371 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=pc:
Jan 05 20:55:53 compute-0 nova_compute[186018]: <domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <domain>kvm</domain>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <machine>pc-i440fx-rhel7.6.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <arch>x86_64</arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <vcpu max='240'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <iothreads supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <os supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='firmware'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <loader supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/OVMF/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>rom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pflash</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='readonly'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>yes</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='secure'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </loader>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </os>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='maximumMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <vendor>AMD</vendor>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='succor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='custom' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-128'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-256'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-512'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <memoryBacking supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='sourceType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>anonymous</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>memfd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </memoryBacking>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <disk supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='diskDevice'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>disk</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cdrom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>floppy</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>lun</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ide</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>fdc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>sata</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </disk>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <graphics supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vnc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egl-headless</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </graphics>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <video supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='modelType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vga</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cirrus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>none</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>bochs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ramfb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </video>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hostdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='mode'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>subsystem</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='startupPolicy'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>mandatory</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>requisite</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>optional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='subsysType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pci</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='capsType'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='pciBackend'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hostdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <rng supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>random</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </rng>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <filesystem supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='driverType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>path</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>handle</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtiofs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </filesystem>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <tpm supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-tis</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-crb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emulator</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>external</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendVersion'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>2.0</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </tpm>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <redirdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </redirdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <channel supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </channel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <crypto supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </crypto>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <interface supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>passt</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </interface>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <panic supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>isa</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>hyperv</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </panic>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <console supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>null</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dev</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pipe</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stdio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>udp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tcp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu-vdagent</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </console>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <gic supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <genid supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backup supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <async-teardown supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <ps2 supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sev supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sgx supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hyperv supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='features'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>relaxed</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vapic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>spinlocks</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vpindex</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>runtime</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>synic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stimer</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reset</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vendor_id</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>frequencies</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reenlightenment</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tlbflush</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ipi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>avic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emsr_bitmap</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>xmm_input</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hyperv>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <launchSecurity supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='sectype'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tdx</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </launchSecurity>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]: </domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.440 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Libvirt host hypervisor capabilities for arch=x86_64 and machine_type=q35:
Jan 05 20:55:53 compute-0 nova_compute[186018]: <domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <path>/usr/libexec/qemu-kvm</path>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <domain>kvm</domain>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <machine>pc-q35-rhel9.8.0</machine>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <arch>x86_64</arch>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <vcpu max='4096'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <iothreads supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <os supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='firmware'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>efi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <loader supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.secboot.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/edk2/ovmf/OVMF_CODE.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/edk2/ovmf/OVMF.amdsev.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>/usr/share/edk2/ovmf/OVMF.inteltdx.secboot.fd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>rom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pflash</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='readonly'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>yes</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='secure'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>yes</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>no</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </loader>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </os>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-passthrough' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='hostPassthroughMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='maximum' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='maximumMigratable'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>on</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>off</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='host-model' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model fallback='forbid'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <vendor>AMD</vendor>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <maxphysaddr mode='passthrough' limit='40'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='x2apic'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-deadline'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='hypervisor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc_adjust'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='spec-ctrl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='stibp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='cmp_legacy'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='overflow-recov'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='succor'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='amd-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='virt-ssbd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lbrv'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='tsc-scale'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='vmcb-clean'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='flushbyasid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pause-filter'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='pfthreshold'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='svme-addr-chk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='require' name='lfence-always-serializing'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <feature policy='disable' name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <mode name='custom' supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='486-v1'>486</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>486-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v1'>Broadwell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v3'>Broadwell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v2'>Broadwell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Broadwell-v4'>Broadwell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Broadwell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Broadwell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v1'>Cascadelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cascadelake-Server-v3'>Cascadelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cascadelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cascadelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Conroe-v1'>Conroe</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Conroe-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Cooperlake-v1'>Cooperlake</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Cooperlake-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Cooperlake-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Denverton-v1'>Denverton</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Denverton-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Denverton-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon' canonical='Dhyana-v1'>Dhyana</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Hygon'>Dhyana-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Hygon'>Dhyana-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Dhyana-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v1'>EPYC</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Genoa-v1'>EPYC-Genoa</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Genoa-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Genoa-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='auto-ibrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD' canonical='EPYC-v2'>EPYC-IBPB</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Milan-v1'>EPYC-Milan</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Milan-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Milan-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amd-psfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='no-nested-data-bp'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='null-sel-clr-base'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='stibp-always-on'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='EPYC-Rome-v1'>EPYC-Rome</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-Rome-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-Rome-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-Rome-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='AMD'>EPYC-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>EPYC-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='EPYC-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='GraniteRapids-v1'>GraniteRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>GraniteRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='GraniteRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-128'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-256'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx10-512'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='prefetchiti'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v1'>Haswell</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v3'>Haswell-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v2'>Haswell-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Haswell-v4'>Haswell-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Haswell-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Haswell-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v1'>Icelake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Icelake-Server-v2'>Icelake-Server-noTSX</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-noTSX'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v6</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v6'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Icelake-Server-v7</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Icelake-Server-v7'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v1'>IvyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='IvyBridge-v2'>IvyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>IvyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='IvyBridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='KnightsMill-v1'>KnightsMill</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>KnightsMill-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='KnightsMill-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4fmaps'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-4vnniw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512er'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512pf'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v1'>Nehalem</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Nehalem-v2'>Nehalem-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Nehalem-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G1-v1'>Opteron_G1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G1-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G2-v1'>Opteron_G2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD' canonical='Opteron_G3-v1'>Opteron_G3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='AMD'>Opteron_G3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G4-v1'>Opteron_G4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G4-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G4-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD' canonical='Opteron_G5-v1'>Opteron_G5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='AMD'>Opteron_G5-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Opteron_G5-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fma4'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tbm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xop'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel' canonical='Penryn-v1'>Penryn</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='Intel'>Penryn-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v1'>SandyBridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='SandyBridge-v2'>SandyBridge-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>SandyBridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SapphireRapids-v1'>SapphireRapids</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SapphireRapids-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SapphireRapids-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='amx-tile'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-bf16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-fp16'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512-vpopcntdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bitalg'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vbmi2'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrc'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fzrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='la57'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='taa-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='tsx-ldtrk'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xfd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='SierraForest-v1'>SierraForest</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>SierraForest-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='SierraForest-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ifma'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-ne-convert'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx-vnni-int8'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='bus-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cmpccxadd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fbsdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='fsrs'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ibrs-all'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mcdt-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pbrsb-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='psdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='sbdr-ssdp-no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='serialize'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vaes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='vpclmulqdq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v1'>Skylake-Client</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v2'>Skylake-Client-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Client-v3'>Skylake-Client-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Client-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Client-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v1'>Skylake-Server</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v2'>Skylake-Server-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Skylake-Server-v3'>Skylake-Server-noTSX-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-noTSX-IBRS'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='hle'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='rtm'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Skylake-Server-v5</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Skylake-Server-v5'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512bw'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512cd'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512dq'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512f'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='avx512vl'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='invpcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pcid'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='pku'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel' canonical='Snowridge-v1'>Snowridge</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='mpx'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v2'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v3'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='core-capability'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='split-lock-detect'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' vendor='Intel'>Snowridge-v4</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='Snowridge-v4'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='cldemote'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='erms'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='gfni'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdir64b'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='movdiri'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='xsaves'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v1'>Westmere</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel' canonical='Westmere-v2'>Westmere-IBRS</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' vendor='Intel'>Westmere-v2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='athlon-v1'>athlon</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>athlon-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='athlon-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='core2duo-v1'>core2duo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>core2duo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='core2duo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='coreduo-v1'>coreduo</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>coreduo-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='coreduo-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm32-v1'>kvm32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='kvm64-v1'>kvm64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>kvm64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel' canonical='n270-v1'>n270</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='Intel'>n270-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='n270-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='ss'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium-v1'>pentium</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium2-v1'>pentium2</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium2-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='pentium3-v1'>pentium3</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>pentium3-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD' canonical='phenom-v1'>phenom</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='no' deprecated='yes' vendor='AMD'>phenom-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <blockers model='phenom-v1'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnow'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <feature name='3dnowext'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </blockers>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu32-v1'>qemu32</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu32-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown' canonical='qemu64-v1'>qemu64</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <model usable='yes' deprecated='yes' vendor='unknown'>qemu64-v1</model>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </mode>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </cpu>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <memoryBacking supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <enum name='sourceType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>anonymous</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <value>memfd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </memoryBacking>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <disk supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='diskDevice'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>disk</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cdrom</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>floppy</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>lun</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>fdc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>sata</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </disk>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <graphics supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vnc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egl-headless</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </graphics>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <video supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='modelType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vga</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>cirrus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>none</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>bochs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ramfb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </video>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hostdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='mode'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>subsystem</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='startupPolicy'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>mandatory</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>requisite</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>optional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='subsysType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pci</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>scsi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='capsType'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='pciBackend'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hostdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <rng supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtio-non-transitional</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>random</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>egd</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </rng>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <filesystem supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='driverType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>path</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>handle</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>virtiofs</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </filesystem>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <tpm supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-tis</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tpm-crb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emulator</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>external</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendVersion'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>2.0</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </tpm>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <redirdev supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='bus'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>usb</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </redirdev>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <channel supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </channel>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <crypto supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendModel'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>builtin</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </crypto>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <interface supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='backendType'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>default</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>passt</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </interface>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <panic supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='model'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>isa</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>hyperv</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </panic>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <console supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='type'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>null</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vc</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pty</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dev</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>file</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>pipe</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stdio</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>udp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tcp</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>unix</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>qemu-vdagent</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>dbus</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </console>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </devices>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   <features>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <gic supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <vmcoreinfo supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <genid supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backingStoreInput supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <backup supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <async-teardown supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <ps2 supported='yes'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sev supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <sgx supported='no'/>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <hyperv supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='features'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>relaxed</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vapic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>spinlocks</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vpindex</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>runtime</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>synic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>stimer</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reset</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>vendor_id</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>frequencies</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>reenlightenment</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tlbflush</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>ipi</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>avic</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>emsr_bitmap</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>xmm_input</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <spinlocks>4095</spinlocks>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <stimer_direct>on</stimer_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_direct>on</tlbflush_direct>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <tlbflush_extended>on</tlbflush_extended>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <vendor_id>Linux KVM Hv</vendor_id>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </defaults>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </hyperv>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     <launchSecurity supported='yes'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       <enum name='sectype'>
Jan 05 20:55:53 compute-0 nova_compute[186018]:         <value>tdx</value>
Jan 05 20:55:53 compute-0 nova_compute[186018]:       </enum>
Jan 05 20:55:53 compute-0 nova_compute[186018]:     </launchSecurity>
Jan 05 20:55:53 compute-0 nova_compute[186018]:   </features>
Jan 05 20:55:53 compute-0 nova_compute[186018]: </domainCapabilities>
Jan 05 20:55:53 compute-0 nova_compute[186018]:  _get_domain_capabilities /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1037
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.510 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.511 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.511 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Checking secure boot support for host arch (x86_64) supports_secure_boot /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1782
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.511 186022 INFO nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Secure Boot support detected
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.515 186022 INFO nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.515 186022 INFO nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] The live_migration_permit_post_copy is set to True and post copy live migration is available so auto-converge will not be in use.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.529 186022 DEBUG nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Enabling emulated TPM support _check_vtpm_support /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:1097
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.552 186022 INFO nova.virt.node [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Determined node identity 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from /var/lib/nova/compute_id
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.572 186022 WARNING nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Compute nodes ['98d67ab0-e613-4c26-9eaa-22cf91b060a7'] for host compute-0.ctlplane.example.com were not found in the database. If this is the first time this service is starting on this host, then you can ignore this warning.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.604 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Looking for unclaimed instances stuck in BUILDING status for nodes managed by this host
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.655 186022 WARNING nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] No compute node record found for host compute-0.ctlplane.example.com. If this is the first time this service is starting on this host, then you can ignore this warning.: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host compute-0.ctlplane.example.com could not be found.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.656 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.656 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.656 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.657 186022 DEBUG nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.860 186022 WARNING nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.862 186022 DEBUG nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=6027MB free_disk=72.64702606201172GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.863 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.864 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.880 186022 WARNING nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] No compute node record for compute-0.ctlplane.example.com:98d67ab0-e613-4c26-9eaa-22cf91b060a7: nova.exception_Remote.ComputeHostNotFound_Remote: Compute host 98d67ab0-e613-4c26-9eaa-22cf91b060a7 could not be found.
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.905 186022 INFO nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Compute node record created for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com with uuid: 98d67ab0-e613-4c26-9eaa-22cf91b060a7
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.985 186022 DEBUG nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 20:55:53 compute-0 nova_compute[186018]: 2026-01-05 20:55:53.985 186022 DEBUG nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.109 186022 INFO nova.scheduler.client.report [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [req-68a32d1e-04ee-4756-8e87-a47c1532fd8b] Created resource provider record via placement API for resource provider with UUID 98d67ab0-e613-4c26-9eaa-22cf91b060a7 and name compute-0.ctlplane.example.com.
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.736 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] /sys/module/kvm_amd/parameters/sev contains [N
Jan 05 20:55:55 compute-0 nova_compute[186018]: ] _kernel_supports_amd_sev /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1803
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.736 186022 INFO nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] kernel doesn't support AMD SEV
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.738 186022 DEBUG nova.compute.provider_tree [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.739 186022 DEBUG nova.virt.libvirt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.806 186022 DEBUG nova.scheduler.client.report [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Updated inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with generation 0 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 0}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.807 186022 DEBUG nova.compute.provider_tree [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Updating resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 generation from 0 to 1 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.807 186022 DEBUG nova.compute.provider_tree [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.915 186022 DEBUG nova.compute.provider_tree [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Updating resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 generation from 1 to 2 during operation: update_traits _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.948 186022 DEBUG nova.compute.resource_tracker [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.948 186022 DEBUG oslo_concurrency.lockutils [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.085s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:55:55 compute-0 nova_compute[186018]: 2026-01-05 20:55:55.949 186022 DEBUG nova.service [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Creating RPC server for service compute start /usr/lib/python3.9/site-packages/nova/service.py:182
Jan 05 20:55:56 compute-0 nova_compute[186018]: 2026-01-05 20:55:56.032 186022 DEBUG nova.service [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Join ServiceGroup membership for this service compute start /usr/lib/python3.9/site-packages/nova/service.py:199
Jan 05 20:55:56 compute-0 nova_compute[186018]: 2026-01-05 20:55:56.033 186022 DEBUG nova.servicegroup.drivers.db [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] DB_Driver: join new ServiceGroup member compute-0.ctlplane.example.com to the compute group, service = <Service: host=compute-0.ctlplane.example.com, binary=nova-compute, manager_class_name=nova.compute.manager.ComputeManager> join /usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py:44
Jan 05 20:55:58 compute-0 nova_compute[186018]: 2026-01-05 20:55:58.035 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:55:58 compute-0 nova_compute[186018]: 2026-01-05 20:55:58.066 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:55:58 compute-0 sshd-session[186315]: Accepted publickey for zuul from 192.168.122.30 port 54272 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:55:58 compute-0 systemd-logind[788]: New session 25 of user zuul.
Jan 05 20:55:58 compute-0 systemd[1]: Started Session 25 of User zuul.
Jan 05 20:55:58 compute-0 sshd-session[186315]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:55:59 compute-0 python3.9[186468]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 20:56:00 compute-0 sudo[186622]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oukggqpdwuuwpnaazrrnueehwmhunigx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646560.2336924-36-92397443834332/AnsiballZ_systemd_service.py'
Jan 05 20:56:00 compute-0 sudo[186622]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:01 compute-0 python3.9[186624]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:56:01 compute-0 systemd[1]: Reloading.
Jan 05 20:56:01 compute-0 systemd-rc-local-generator[186651]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:56:01 compute-0 systemd-sysv-generator[186654]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:56:01 compute-0 sudo[186622]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:02 compute-0 python3.9[186809]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:56:02 compute-0 network[186826]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:56:02 compute-0 network[186827]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:56:02 compute-0 network[186828]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:56:08 compute-0 sudo[187098]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hrgwnuckkbiwjpvojbxxslfquldrnmab ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646568.2449825-55-30561108978946/AnsiballZ_systemd_service.py'
Jan 05 20:56:08 compute-0 sudo[187098]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:09 compute-0 python3.9[187100]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_compute.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:56:09 compute-0 sudo[187098]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:10 compute-0 sudo[187251]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eadlmedeghyvxqcdhrzfipifpdjbfnfp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646569.504817-65-246377933039688/AnsiballZ_file.py'
Jan 05 20:56:10 compute-0 sudo[187251]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:10 compute-0 python3.9[187253]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:10 compute-0 sudo[187251]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:10 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:56:10 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:56:10 compute-0 rsyslogd[1006]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 20:56:10 compute-0 sudo[187404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-smszksdstxbfbhkfygfqndsarapnmyon ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646570.5812628-73-249761218892024/AnsiballZ_file.py'
Jan 05 20:56:10 compute-0 sudo[187404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:11 compute-0 python3.9[187406]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_compute.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:11 compute-0 sudo[187404]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:12 compute-0 sudo[187556]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rxrsykvbkiuswalxuukhidediposnmer ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646571.6038146-82-155770631971011/AnsiballZ_command.py'
Jan 05 20:56:12 compute-0 sudo[187556]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:12 compute-0 python3.9[187558]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:56:12 compute-0 sudo[187556]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:13 compute-0 python3.9[187710]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:56:14 compute-0 sudo[187860]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsxgycqwllaakelrnctsittvxzivgsvr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646573.8247938-100-229754452753445/AnsiballZ_systemd_service.py'
Jan 05 20:56:14 compute-0 sudo[187860]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:14 compute-0 python3.9[187862]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:56:14 compute-0 systemd[1]: Reloading.
Jan 05 20:56:14 compute-0 systemd-sysv-generator[187895]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:56:14 compute-0 systemd-rc-local-generator[187891]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:56:14 compute-0 sudo[187860]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:15 compute-0 podman[187899]: 2026-01-05 20:56:15.052686777 +0000 UTC m=+0.154723817 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 20:56:15 compute-0 sudo[188075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixnqlhtxeygtvyamdfosixprckswmlao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646575.1454637-108-142728773722421/AnsiballZ_command.py'
Jan 05 20:56:15 compute-0 sudo[188075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:15 compute-0 python3.9[188077]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_compute.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:56:15 compute-0 sudo[188075]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:16 compute-0 sudo[188228]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hsfdleakumjwedgmtyaygtdjmjvpdhqq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646576.0997517-117-111036114557705/AnsiballZ_file.py'
Jan 05 20:56:16 compute-0 sudo[188228]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:16 compute-0 python3.9[188230]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:16 compute-0 sudo[188228]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:17 compute-0 python3.9[188380]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:18 compute-0 sudo[188532]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qiivfmcoyshioddofkrndoqjgormhtnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646578.029641-133-8931529257194/AnsiballZ_group.py'
Jan 05 20:56:18 compute-0 sudo[188532]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:18 compute-0 python3.9[188534]: ansible-ansible.builtin.group Invoked with name=libvirt state=present force=False system=False local=False non_unique=False gid=None gid_min=None gid_max=None
Jan 05 20:56:18 compute-0 sudo[188532]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:19 compute-0 sudo[188684]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzqshfmlobioweraclxfwulzixvbdjuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646579.1652784-144-82614206967119/AnsiballZ_getent.py'
Jan 05 20:56:19 compute-0 sudo[188684]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:19 compute-0 python3.9[188686]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 05 20:56:20 compute-0 sudo[188684]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:20 compute-0 podman[188688]: 2026-01-05 20:56:20.1239094 +0000 UTC m=+0.092073457 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 20:56:20 compute-0 sudo[188858]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lszmpmtymsjevvhyevttujwuwmxvyymb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646580.2490265-152-119407723185812/AnsiballZ_group.py'
Jan 05 20:56:20 compute-0 sudo[188858]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:20 compute-0 python3.9[188860]: ansible-ansible.builtin.group Invoked with gid=42405 name=ceilometer state=present force=False system=False local=False non_unique=False gid_min=None gid_max=None
Jan 05 20:56:20 compute-0 groupadd[188861]: group added to /etc/group: name=ceilometer, GID=42405
Jan 05 20:56:20 compute-0 groupadd[188861]: group added to /etc/gshadow: name=ceilometer
Jan 05 20:56:20 compute-0 groupadd[188861]: new group: name=ceilometer, GID=42405
Jan 05 20:56:20 compute-0 sudo[188858]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:21 compute-0 sudo[189016]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nhsxmvmfhlqxtvpyoupbnhwzowfcogqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646581.2042422-160-116626083701906/AnsiballZ_user.py'
Jan 05 20:56:21 compute-0 sudo[189016]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:22 compute-0 python3.9[189018]: ansible-ansible.builtin.user Invoked with comment=ceilometer user group=ceilometer groups=['libvirt'] name=ceilometer shell=/sbin/nologin state=present uid=42405 non_unique=False force=False remove=False create_home=True system=False move_home=False append=False ssh_key_bits=0 ssh_key_type=rsa ssh_key_comment=ansible-generated on compute-0 update_password=always home=None password=NOT_LOGGING_PARAMETER login_class=None password_expire_max=None password_expire_min=None password_expire_warn=None hidden=None seuser=None skeleton=None generate_ssh_key=None ssh_key_file=None ssh_key_passphrase=NOT_LOGGING_PARAMETER expires=None password_lock=None local=None profile=None authorization=None role=None umask=None password_expire_account_disable=None uid_min=None uid_max=None
Jan 05 20:56:22 compute-0 useradd[189020]: new user: name=ceilometer, UID=42405, GID=42405, home=/home/ceilometer, shell=/sbin/nologin, from=/dev/pts/0
Jan 05 20:56:22 compute-0 useradd[189020]: add 'ceilometer' to group 'libvirt'
Jan 05 20:56:22 compute-0 useradd[189020]: add 'ceilometer' to shadow group 'libvirt'
Jan 05 20:56:22 compute-0 sudo[189016]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:23 compute-0 python3.9[189176]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:24 compute-0 python3.9[189297]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646582.980503-186-47190580014358/.source.conf _original_basename=ceilometer.conf follow=False checksum=f74f01c63e6cdeca5458ef9aff2a1db5d6a4e4b9 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:25 compute-0 python3.9[189447]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:25 compute-0 python3.9[189568]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646584.578348-186-74860681299588/.source.yaml _original_basename=polling.yaml follow=False checksum=6c8680a286285f2e0ef9fa528ca754765e5ed0e5 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:26 compute-0 python3.9[189718]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:27 compute-0 python3.9[189839]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646586.230847-186-236820365307841/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:28 compute-0 python3.9[189989]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:28 compute-0 python3.9[190141]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:29 compute-0 python3.9[190293]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:30 compute-0 python3.9[190414]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646589.1205778-245-153951840867588/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:31 compute-0 python3.9[190564]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/openstack_network_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:31 compute-0 python3.9[190685]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/openstack_network_exporter.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646590.7438972-245-246523554021493/.source.yaml follow=False _original_basename=openstack_network_exporter.yaml.j2 checksum=87dede51a10e22722618c1900db75cb764463d91 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:32 compute-0 python3.9[190835]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:33 compute-0 python3.9[190956]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646592.2322164-274-48335443643540/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:34 compute-0 python3.9[191106]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/node_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:35 compute-0 python3.9[191227]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/node_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646593.8446708-290-54769160517087/.source.yaml _original_basename=node_exporter.yaml follow=False checksum=81d906d3e1e8c4f8367276f5d3a67b80ca7e989e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:36 compute-0 python3.9[191377]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/podman_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:36 compute-0 python3.9[191498]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/podman_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646595.3914645-305-195489374858752/.source.yaml _original_basename=podman_exporter.yaml follow=False checksum=7ccb5eca2ff1dc337c3f3ecbbff5245af7149c47 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:37 compute-0 python3.9[191648]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:38 compute-0 python3.9[191769]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646596.953925-320-184539810370401/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:38 compute-0 sudo[191919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zljknkhldjinfmwmhxpqctrkiaztarep ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646598.50011-335-129604280205595/AnsiballZ_file.py'
Jan 05 20:56:38 compute-0 sudo[191919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:39 compute-0 python3.9[191921]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:39 compute-0 sudo[191919]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:39 compute-0 sudo[192071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rgoydwovklhtnnxguyeuzqggjlezyark ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646599.3252494-343-17457079318156/AnsiballZ_file.py'
Jan 05 20:56:39 compute-0 sudo[192071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:39 compute-0 python3.9[192073]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:39 compute-0 sudo[192071]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:40 compute-0 python3.9[192223]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:41 compute-0 python3.9[192375]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:42 compute-0 python3.9[192527]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:56:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:56:42.822 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:56:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:56:42.823 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:56:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:56:42.823 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:56:43 compute-0 sudo[192679]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-amrphrtwwthqkalacanyxmcizgteyrvb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646602.630858-375-11781025735337/AnsiballZ_file.py'
Jan 05 20:56:43 compute-0 sudo[192679]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:43 compute-0 python3.9[192681]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:43 compute-0 sudo[192679]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:43 compute-0 sudo[192831]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjoslpuusonpdbduccabdlgqngnjemwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646603.5125113-383-167321437094069/AnsiballZ_systemd_service.py'
Jan 05 20:56:43 compute-0 sudo[192831]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:44 compute-0 python3.9[192833]: ansible-ansible.builtin.systemd_service Invoked with enabled=True name=podman.socket state=started daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:56:44 compute-0 systemd[1]: Reloading.
Jan 05 20:56:44 compute-0 systemd-rc-local-generator[192862]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:56:44 compute-0 systemd-sysv-generator[192868]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:56:44 compute-0 systemd[1]: Listening on Podman API Socket.
Jan 05 20:56:44 compute-0 sudo[192831]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:45 compute-0 sudo[193033]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hfxvkjgvvammljlvuytypcjdckmquqyr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/AnsiballZ_stat.py'
Jan 05 20:56:45 compute-0 sudo[193033]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:45 compute-0 podman[192996]: 2026-01-05 20:56:45.576260897 +0000 UTC m=+0.128594965 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, io.buildah.version=1.41.3)
Jan 05 20:56:45 compute-0 python3.9[193041]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:45 compute-0 sudo[193033]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:46 compute-0 sudo[193171]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uoyrputerngrqsqllnzholsjmpzodzxq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/AnsiballZ_copy.py'
Jan 05 20:56:46 compute-0 sudo[193171]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:46 compute-0 python3.9[193173]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:46 compute-0 sudo[193171]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:46 compute-0 sudo[193247]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fzpuagtfnrijkkhqlltttpepgqzobxxa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/AnsiballZ_stat.py'
Jan 05 20:56:46 compute-0 sudo[193247]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:46 compute-0 python3.9[193249]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_compute/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:46 compute-0 sudo[193247]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:47 compute-0 sudo[193370]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rkoorkwfpspwkcifbsuztixtwfyfvzae ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/AnsiballZ_copy.py'
Jan 05 20:56:47 compute-0 sudo[193370]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:47 compute-0 python3.9[193372]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_compute/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646605.060642-392-208470379470544/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:47 compute-0 sudo[193370]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:48 compute-0 sudo[193522]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aowexjeailrutiqqdfbxixoucwjqkwwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646608.2250388-424-29101242258927/AnsiballZ_file.py'
Jan 05 20:56:48 compute-0 sudo[193522]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:48 compute-0 python3.9[193524]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:48 compute-0 sudo[193522]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:49 compute-0 sudo[193674]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhjghyzvksvcyusmxuqvpjrfmylgldes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646609.1609373-432-32951580251677/AnsiballZ_file.py'
Jan 05 20:56:49 compute-0 sudo[193674]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:49 compute-0 python3.9[193676]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:56:49 compute-0 sudo[193674]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:50 compute-0 sudo[193839]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ecqcipmzzgfmaealmqamlcojiaqmwffx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646610.0906684-440-275399752066005/AnsiballZ_stat.py'
Jan 05 20:56:50 compute-0 sudo[193839]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:50 compute-0 podman[193800]: 2026-01-05 20:56:50.559905813 +0000 UTC m=+0.097543013 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 20:56:50 compute-0 python3.9[193845]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:56:50 compute-0 sudo[193839]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:51 compute-0 sudo[193968]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nnzsqwgjwhzimxzysqeadvqimnbqsbsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646610.0906684-440-275399752066005/AnsiballZ_copy.py'
Jan 05 20:56:51 compute-0 sudo[193968]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:51 compute-0 python3.9[193970]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646610.0906684-440-275399752066005/.source.json _original_basename=.ihy3_5sb follow=False checksum=ce2b0c83293a970bafffa087afa083dd7c93a79c backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:51 compute-0 sudo[193968]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.464 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.465 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.465 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.465 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.481 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.482 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.482 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.483 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.483 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.484 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.484 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.484 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.485 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.516 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.516 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.517 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.517 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:56:52 compute-0 python3.9[194120]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.793 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.795 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5999MB free_disk=72.64703750610352GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.795 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.796 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.887 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.887 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.935 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.955 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.958 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 20:56:52 compute-0 nova_compute[186018]: 2026-01-05 20:56:52.959 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:56:55 compute-0 sudo[194541]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eothmmnbvucdjkpprvoimiaruzjlqqtd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646614.780465-480-269275887978357/AnsiballZ_container_config_data.py'
Jan 05 20:56:55 compute-0 sudo[194541]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:55 compute-0 python3.9[194543]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_pattern=*.json debug=False
Jan 05 20:56:55 compute-0 sudo[194541]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:56 compute-0 sudo[194693]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sthtfcziwlhcwfftckfvaafstshqruwx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646616.017848-491-268533338409781/AnsiballZ_container_config_hash.py'
Jan 05 20:56:56 compute-0 sudo[194693]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:56 compute-0 python3.9[194695]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:56:56 compute-0 sudo[194693]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:57 compute-0 sudo[194845]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ynnvwzjvijgkdjhyptqwtxmvtyoksvsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646617.1414042-500-55863384836621/AnsiballZ_podman_container_info.py'
Jan 05 20:56:57 compute-0 sudo[194845]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:57 compute-0 python3.9[194847]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:56:58 compute-0 sudo[194845]: pam_unix(sudo:session): session closed for user root
Jan 05 20:56:59 compute-0 sudo[195023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xoilbddblgnwhgapqxttpuhaqppzscrc ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646618.767096-513-140665106353481/AnsiballZ_edpm_container_manage.py'
Jan 05 20:56:59 compute-0 sudo[195023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:56:59 compute-0 python3[195025]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_compute config_id=ceilometer_agent_compute config_overrides={} config_patterns=*.json containers=['ceilometer_agent_compute'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:56:59 compute-0 podman[195062]: 2026-01-05 20:56:59.969154126 +0000 UTC m=+0.075402957 container create dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 05 20:56:59 compute-0 podman[195062]: 2026-01-05 20:56:59.929110726 +0000 UTC m=+0.035359607 image pull 6e61bfccaf21ee9962f8af7b3bc33737123ae362fb340f43cd517263f3ab794c quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested
Jan 05 20:56:59 compute-0 python3[195025]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_compute --conmon-pidfile /run/ceilometer_agent_compute.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck compute --label config_id=ceilometer_agent_compute --label container_name=ceilometer_agent_compute --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']} --log-driver journald --log-level info --network host --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z --volume /run/libvirt:/run/libvirt:shared,ro --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested kolla_start
Jan 05 20:57:00 compute-0 sudo[195023]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:00 compute-0 sudo[195250]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xmlubnnlabshscqpmkhpikpbbwlocqxj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646620.3966353-521-204457717376302/AnsiballZ_stat.py'
Jan 05 20:57:00 compute-0 sudo[195250]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:00 compute-0 python3.9[195252]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:01 compute-0 sudo[195250]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:01 compute-0 sudo[195404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xkwqghgacyzuaspkgueyncxkqlfqafig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646621.3507628-530-269116652305348/AnsiballZ_file.py'
Jan 05 20:57:01 compute-0 sudo[195404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:01 compute-0 python3.9[195406]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:01 compute-0 sudo[195404]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:02 compute-0 sudo[195480]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-narkpyjzprfndvmdrnfsotlxyrfilwyn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646621.3507628-530-269116652305348/AnsiballZ_stat.py'
Jan 05 20:57:02 compute-0 sudo[195480]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:02 compute-0 python3.9[195482]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_compute_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:02 compute-0 sudo[195480]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:03 compute-0 sudo[195631]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lyrsuuemmzlrgttbmdeteeniourirlxs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646622.5564096-530-269406499139118/AnsiballZ_copy.py'
Jan 05 20:57:03 compute-0 sudo[195631]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:03 compute-0 python3.9[195633]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646622.5564096-530-269406499139118/source dest=/etc/systemd/system/edpm_ceilometer_agent_compute.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:03 compute-0 sudo[195631]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:04 compute-0 sudo[195707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vmveafohdrhgdvnlisepbrlqtwhwbuvk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646622.5564096-530-269406499139118/AnsiballZ_systemd.py'
Jan 05 20:57:04 compute-0 sudo[195707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:04 compute-0 python3.9[195709]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:57:04 compute-0 systemd[1]: Reloading.
Jan 05 20:57:04 compute-0 systemd-rc-local-generator[195735]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:04 compute-0 systemd-sysv-generator[195741]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:04 compute-0 sudo[195707]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:05 compute-0 sudo[195818]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdmlefknvjyhhwuorxhujtkvoexacscc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646622.5564096-530-269406499139118/AnsiballZ_systemd.py'
Jan 05 20:57:05 compute-0 sudo[195818]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:05 compute-0 python3.9[195820]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_compute.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:57:05 compute-0 systemd[1]: Reloading.
Jan 05 20:57:05 compute-0 systemd-rc-local-generator[195848]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:05 compute-0 systemd-sysv-generator[195852]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:06 compute-0 systemd[1]: Starting ceilometer_agent_compute container...
Jan 05 20:57:06 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a645af2ba7afa5b52a025a9f486647b85bdf431670bf5f6aa518a363541d6c/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a645af2ba7afa5b52a025a9f486647b85bdf431670bf5f6aa518a363541d6c/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a645af2ba7afa5b52a025a9f486647b85bdf431670bf5f6aa518a363541d6c/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:06 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/17a645af2ba7afa5b52a025a9f486647b85bdf431670bf5f6aa518a363541d6c/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:06 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.
Jan 05 20:57:06 compute-0 podman[195859]: 2026-01-05 20:57:06.268206791 +0000 UTC m=+0.205987134 container init dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute)
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + sudo -E kolla_set_configs
Jan 05 20:57:06 compute-0 podman[195859]: 2026-01-05 20:57:06.30141819 +0000 UTC m=+0.239198523 container start dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:57:06 compute-0 podman[195859]: ceilometer_agent_compute
Jan 05 20:57:06 compute-0 sudo[195880]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: sudo: unable to send audit message: Operation not permitted
Jan 05 20:57:06 compute-0 systemd[1]: Started ceilometer_agent_compute container.
Jan 05 20:57:06 compute-0 sudo[195880]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 20:57:06 compute-0 sudo[195818]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Validating config file
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Copying service configuration files
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: INFO:__main__:Writing out command to execute
Jan 05 20:57:06 compute-0 sudo[195880]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: ++ cat /run_command
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + ARGS=
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + sudo kolla_copy_cacerts
Jan 05 20:57:06 compute-0 sudo[195907]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: sudo: unable to send audit message: Operation not permitted
Jan 05 20:57:06 compute-0 sudo[195907]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 20:57:06 compute-0 sudo[195907]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + [[ ! -n '' ]]
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + . kolla_extend_start
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout'\'''
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + umask 0022
Jan 05 20:57:06 compute-0 ceilometer_agent_compute[195874]: + exec /usr/bin/ceilometer-polling --polling-namespaces compute --logfile /dev/stdout
Jan 05 20:57:06 compute-0 podman[195881]: 2026-01-05 20:57:06.451427081 +0000 UTC m=+0.120366737 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=1, health_log=, io.buildah.version=1.41.4, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 20:57:06 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 20:57:06 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Failed with result 'exit-code'.
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:45
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.306 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.307 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 WARNING oslo_config.cfg [-] Deprecated: Option "tenant_name_discovery" from group "DEFAULT" is deprecated. Use option "identity_name_discovery" from group "DEFAULT".
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.308 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.309 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.310 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.311 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.312 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.313 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.314 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.315 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.316 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.317 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.318 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.319 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.343 12 INFO ceilometer.polling.manager [-] Starting heartbeat child service. Listening on /var/lib/ceilometer/ceilometer-compute.socket
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.344 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.344 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.344 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.345 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.345 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.345 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.345 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.346 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.347 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.348 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.349 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.350 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.351 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.352 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.353 12 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.354 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.355 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.356 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.357 12 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.358 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.359 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.360 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.361 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.362 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.363 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.364 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.365 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.366 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.366 12 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.366 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 05 20:57:07 compute-0 python3.9[196056]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.366 12 DEBUG cotyledon._service [-] Run service AgentHeartBeatManager(0) [12] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.369 12 DEBUG ceilometer.polling.manager [-] Started heartbeat child process. run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:519
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.373 12 DEBUG ceilometer.polling.manager [-] Started heartbeat update thread _read_queue /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:522
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.374 12 DEBUG ceilometer.polling.manager [-] Started heartbeat reporting thread _report_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:527
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.585 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.596 14 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.596 14 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.596 14 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.728 14 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.12/site-packages/cotyledon/oslo_config_glue.py:53
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2804
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2805
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'compute', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2806
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2807
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2809
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.729 14 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] enable_notifications           = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] enable_prometheus_exporter     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.730 14 DEBUG cotyledon.oslo_config_glue [-] heartbeat_socket_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] identity_name_discovery        = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] ignore_disabled_projects       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.731 14 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_color                      = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.732 14 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['compute'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_listen_addresses    = ['127.0.0.1:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.733 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_certfile        = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_enable          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] prometheus_tls_keyfile         = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.734 14 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] shell_completion               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] threads_to_process_pollsters   = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2817
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] compute.fetch_extra_metadata   = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.735 14 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.12/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.736 14 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_notifications   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.enable_prometheus_exporter = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.heartbeat_socket_dir   = /var/lib/ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.identity_name_discovery = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.ignore_disabled_projects = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_listen_addresses = ['[::]:9101'] log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_certfile = /etc/ceilometer/tls/tls.crt log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_enable  = True log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.prometheus_tls_keyfile = /etc/ceilometer/tls/tls.key log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.737 14 DEBUG cotyledon.oslo_config_glue [-] polling.threads_to_process_pollsters = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] service_types.aodh             = alarming log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.738 14 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_url   = https://keystone-internal.openstack.svc:5000 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.default_domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_id  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.domain_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.password   = **** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.739 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.project_name = service log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.system_scope = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.trust_id   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_id = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_domain_name = Default log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.user_id    = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] service_credentials.username   = ceilometer log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.740 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.741 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.file_event_handler_interval = 1 log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] oslo_reports.log_dir           = None log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2824
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.12/site-packages/oslo_config/cfg.py:2828
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.742 14 DEBUG cotyledon._service [-] Run service AgentManager(0) [14] wait_forever /usr/lib/python3.12/site-packages/cotyledon/_service.py:263
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.745 14 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['power.state', 'cpu', 'memory.usage', 'disk.*', 'network.*']}]} load_config /usr/lib/python3.12/site-packages/ceilometer/agent.py:64
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.769 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.769 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.769 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.770 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.770 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.771 14 DEBUG ceilometer.compute.virt.libvirt.utils [-] Connecting to libvirt: qemu:///system new_libvirt_connection /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/utils.py:96
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.771 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.772 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d78f0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.776 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.781 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.782 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.783 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:57:07.784 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:57:08 compute-0 sudo[196219]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcwjvtxanmfmxsyhnbpfujxvkgahrmgn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646627.8026729-571-51026595395072/AnsiballZ_stat.py'
Jan 05 20:57:08 compute-0 sudo[196219]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:08 compute-0 python3.9[196221]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:08 compute-0 sudo[196219]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:09 compute-0 sudo[196344]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emjkyklxzrxpjnbycaogkiizoifntzst ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646627.8026729-571-51026595395072/AnsiballZ_copy.py'
Jan 05 20:57:09 compute-0 sudo[196344]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:09 compute-0 python3.9[196346]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646627.8026729-571-51026595395072/.source.yaml _original_basename=.gb84l2qp follow=False checksum=8f71c6c242afdfd056e15e6b24b9016335dadc82 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:09 compute-0 sudo[196344]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:09 compute-0 sudo[196496]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-egukfhtjtvpnolrcguqgfpudadwddctw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646629.5198321-586-88250252027821/AnsiballZ_stat.py'
Jan 05 20:57:09 compute-0 sudo[196496]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:10 compute-0 python3.9[196498]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/node_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:10 compute-0 sudo[196496]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:10 compute-0 sudo[196619]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ggwewfwbfeyfaqbtfkkejxuwhewztzjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646629.5198321-586-88250252027821/AnsiballZ_copy.py'
Jan 05 20:57:10 compute-0 sudo[196619]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:10 compute-0 python3.9[196621]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/node_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646629.5198321-586-88250252027821/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:10 compute-0 sudo[196619]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:11 compute-0 sudo[196771]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjlwmrighwpgvvwnqfwypqiawloatdck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646631.6089299-607-18488501304995/AnsiballZ_file.py'
Jan 05 20:57:11 compute-0 sudo[196771]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:12 compute-0 python3.9[196773]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:12 compute-0 sudo[196771]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:12 compute-0 sudo[196923]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjzulmagpggjeacsnruvjtgtvustisua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646632.4822154-615-89283167216308/AnsiballZ_file.py'
Jan 05 20:57:12 compute-0 sudo[196923]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:13 compute-0 python3.9[196925]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:13 compute-0 sudo[196923]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:13 compute-0 sudo[197075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zvwdzywzrgvreduushqzywxupeoduvwm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646633.3910565-623-184612448982869/AnsiballZ_stat.py'
Jan 05 20:57:13 compute-0 sudo[197075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:14 compute-0 python3.9[197077]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:14 compute-0 sudo[197075]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:14 compute-0 sudo[197153]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gcwuxdugggutqdqenxqntuuyhwxcezua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646633.3910565-623-184612448982869/AnsiballZ_file.py'
Jan 05 20:57:14 compute-0 sudo[197153]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:14 compute-0 python3.9[197155]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.nkia_614 recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:14 compute-0 sudo[197153]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:15 compute-0 python3.9[197305]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/node_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:15 compute-0 podman[197330]: 2026-01-05 20:57:15.775996661 +0000 UTC m=+0.127568128 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 20:57:17 compute-0 sudo[197753]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-abuhsrbcujsyjegaqpcvwhdjcihjlucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646637.4551091-660-186446924695411/AnsiballZ_container_config_data.py'
Jan 05 20:57:17 compute-0 sudo[197753]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:18 compute-0 python3.9[197755]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/node_exporter config_pattern=*.json debug=False
Jan 05 20:57:18 compute-0 sudo[197753]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:19 compute-0 sudo[197905]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-orpgtprmdyjdzztjrtdteyozfbeanegg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646638.5986762-671-234954668000330/AnsiballZ_container_config_hash.py'
Jan 05 20:57:19 compute-0 sudo[197905]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:19 compute-0 python3.9[197907]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:57:19 compute-0 sudo[197905]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:20 compute-0 sudo[198057]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikevpslafqloaeutmyehnyeyufitmgkx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646639.6540508-680-118394341166125/AnsiballZ_podman_container_info.py'
Jan 05 20:57:20 compute-0 sudo[198057]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:20 compute-0 python3.9[198059]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:57:20 compute-0 sudo[198057]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:20 compute-0 podman[198111]: 2026-01-05 20:57:20.7434764 +0000 UTC m=+0.083043969 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 05 20:57:21 compute-0 sudo[198256]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nbsqgiglipfzvpfudxzfawcywwxwenrp ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646641.3127584-693-26196836380487/AnsiballZ_edpm_container_manage.py'
Jan 05 20:57:21 compute-0 sudo[198256]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:21 compute-0 python3[198258]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/node_exporter config_id=node_exporter config_overrides={} config_patterns=*.json containers=['node_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:57:22 compute-0 podman[198296]: 2026-01-05 20:57:22.243963397 +0000 UTC m=+0.073187358 container create b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, config_id=node_exporter, container_name=node_exporter)
Jan 05 20:57:22 compute-0 podman[198296]: 2026-01-05 20:57:22.209514255 +0000 UTC m=+0.038738206 image pull 0da6a335fe1356545476b749c68f022c897de3a2139e8f0054f6937349ee2b83 quay.io/prometheus/node-exporter:v1.5.0
Jan 05 20:57:22 compute-0 python3[198258]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name node_exporter --conmon-pidfile /run/node_exporter.pid --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck node_exporter --label config_id=node_exporter --label container_name=node_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9100:9100 --user root --volume /var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z --volume /var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw --volume /var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z quay.io/prometheus/node-exporter:v1.5.0 --web.config.file=/etc/node_exporter/node_exporter.yaml --web.disable-exporter-metrics --collector.systemd --collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service --no-collector.dmi --no-collector.entropy --no-collector.thermal_zone --no-collector.time --no-collector.timex --no-collector.uname --no-collector.stat --no-collector.hwmon --no-collector.os --no-collector.selinux --no-collector.textfile --no-collector.powersupplyclass --no-collector.pressure --no-collector.rapl
Jan 05 20:57:22 compute-0 sudo[198256]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:23 compute-0 sudo[198483]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rtlsrjnqyiujoichsgvoyisulvrkcofu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646642.6284556-701-230462566016459/AnsiballZ_stat.py'
Jan 05 20:57:23 compute-0 sudo[198483]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:23 compute-0 python3.9[198485]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:23 compute-0 sudo[198483]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:23 compute-0 sudo[198637]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzpcquijpwawyzxyatfcaeyntpkclkxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646643.5625288-710-152747008200480/AnsiballZ_file.py'
Jan 05 20:57:23 compute-0 sudo[198637]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:24 compute-0 python3.9[198639]: ansible-file Invoked with path=/etc/systemd/system/edpm_node_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:24 compute-0 sudo[198637]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:24 compute-0 sudo[198713]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpgkcprfuiiccdcivnxnvqbqdhqvjvhe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646643.5625288-710-152747008200480/AnsiballZ_stat.py'
Jan 05 20:57:24 compute-0 sudo[198713]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:24 compute-0 python3.9[198715]: ansible-stat Invoked with path=/etc/systemd/system/edpm_node_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:24 compute-0 sudo[198713]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:25 compute-0 sudo[198864]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-skeaecxlfupbvvgsivfhzovosfnjjfbz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646644.810268-710-142278838623403/AnsiballZ_copy.py'
Jan 05 20:57:25 compute-0 sudo[198864]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:25 compute-0 python3.9[198866]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646644.810268-710-142278838623403/source dest=/etc/systemd/system/edpm_node_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:25 compute-0 sudo[198864]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:25 compute-0 sudo[198940]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iltsyiucrxipaelfeuyqkvpbfvkddmva ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646644.810268-710-142278838623403/AnsiballZ_systemd.py'
Jan 05 20:57:25 compute-0 sudo[198940]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:26 compute-0 python3.9[198942]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:57:26 compute-0 systemd[1]: Reloading.
Jan 05 20:57:26 compute-0 systemd-sysv-generator[198976]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:26 compute-0 systemd-rc-local-generator[198973]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:26 compute-0 sudo[198940]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:26 compute-0 sudo[199054]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lcjjlkssfjgrpcexuzvvxsgkameqnwse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646644.810268-710-142278838623403/AnsiballZ_systemd.py'
Jan 05 20:57:26 compute-0 sudo[199054]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:27 compute-0 python3.9[199056]: ansible-systemd Invoked with state=restarted name=edpm_node_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:57:27 compute-0 systemd[1]: Reloading.
Jan 05 20:57:27 compute-0 systemd-sysv-generator[199091]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:27 compute-0 systemd-rc-local-generator[199086]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:27 compute-0 systemd[1]: Starting node_exporter container...
Jan 05 20:57:27 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc19a69a9cf16bf7859b6aaf08a8e5462e92cb5f3bafcac695ef6271cb669939/merged/etc/node_exporter/node_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:27 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/dc19a69a9cf16bf7859b6aaf08a8e5462e92cb5f3bafcac695ef6271cb669939/merged/etc/node_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:27 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.
Jan 05 20:57:27 compute-0 podman[199096]: 2026-01-05 20:57:27.922221141 +0000 UTC m=+0.183284103 container init b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.945Z caller=node_exporter.go:180 level=info msg="Starting node_exporter" version="(version=1.5.0, branch=HEAD, revision=1b48970ffcf5630534fb00bb0687d73c66d1c959)"
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.945Z caller=node_exporter.go:181 level=info msg="Build context" build_context="(go=go1.19.3, user=root@6e7732a7b81b, date=20221129-18:59:09)"
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.945Z caller=node_exporter.go:183 level=warn msg="Node Exporter is running as root user. This exporter is designed to run as unprivileged user, root is not required."
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=filesystem_common.go:111 level=info collector=filesystem msg="Parsed flag --collector.filesystem.mount-points-exclude" flag=^/(dev|proc|run/credentials/.+|sys|var/lib/docker/.+|var/lib/containers/storage/.+)($|/)
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=filesystem_common.go:113 level=info collector=filesystem msg="Parsed flag --collector.filesystem.fs-types-exclude" flag=^(autofs|binfmt_misc|bpf|cgroup2?|configfs|debugfs|devpts|devtmpfs|fusectl|hugetlbfs|iso9660|mqueue|nsfs|overlay|proc|procfs|pstore|rpc_pipefs|securityfs|selinuxfs|squashfs|sysfs|tracefs)$
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=diskstats_common.go:111 level=info collector=diskstats msg="Parsed flag --collector.diskstats.device-exclude" flag=^(ram|loop|fd|(h|s|v|xv)d[a-z]|nvme\d+n\d+p)\d+$
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=diskstats_linux.go:264 level=error collector=diskstats msg="Failed to open directory, disabling udev device properties" path=/run/udev/data
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=systemd_linux.go:152 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-include" flag=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\.service
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.946Z caller=systemd_linux.go:154 level=info collector=systemd msg="Parsed flag --collector.systemd.unit-exclude" flag=.+\.(automount|device|mount|scope|slice)
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:110 level=info msg="Enabled collectors"
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=arp
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=bcache
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=bonding
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=btrfs
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=conntrack
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=cpu
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=cpufreq
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=diskstats
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=edac
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=fibrechannel
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=filefd
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=filesystem
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=infiniband
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=ipvs
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=loadavg
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=mdadm
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=meminfo
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=netclass
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=netdev
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=netstat
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=nfs
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=nfsd
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=nvme
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=schedstat
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=sockstat
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=softnet
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=systemd
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=tapestats
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=udp_queues
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=vmstat
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.947Z caller=node_exporter.go:117 level=info collector=xfs
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.948Z caller=node_exporter.go:117 level=info collector=zfs
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.949Z caller=tls_config.go:232 level=info msg="Listening on" address=[::]:9100
Jan 05 20:57:27 compute-0 node_exporter[199111]: ts=2026-01-05T20:57:27.950Z caller=tls_config.go:268 level=info msg="TLS is enabled." http2=true address=[::]:9100
Jan 05 20:57:27 compute-0 podman[199096]: 2026-01-05 20:57:27.956320153 +0000 UTC m=+0.217383065 container start b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 20:57:27 compute-0 podman[199096]: node_exporter
Jan 05 20:57:27 compute-0 systemd[1]: Started node_exporter container.
Jan 05 20:57:28 compute-0 sudo[199054]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:28 compute-0 podman[199120]: 2026-01-05 20:57:28.038836637 +0000 UTC m=+0.072301305 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 20:57:28 compute-0 python3.9[199290]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:57:29 compute-0 sudo[199440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-efgpwdalpsslxmhwimvmlzlaifxsztef ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646649.297774-751-95918443185457/AnsiballZ_stat.py'
Jan 05 20:57:29 compute-0 sudo[199440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:29 compute-0 python3.9[199442]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:29 compute-0 sudo[199440]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:30 compute-0 sudo[199565]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-drqyvqtdocvasfwdqxeeeqbtwoamcfgs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646649.297774-751-95918443185457/AnsiballZ_copy.py'
Jan 05 20:57:30 compute-0 sudo[199565]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:30 compute-0 python3.9[199567]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646649.297774-751-95918443185457/.source.yaml _original_basename=.kvb9h2ib follow=False checksum=eb0b5a6a4466f8f0744f170cadd10679e9e71acc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:30 compute-0 sudo[199565]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:31 compute-0 sudo[199717]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rutknhmbnmnvmynhgoinbdragdievuqa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646651.0311573-766-71713028392766/AnsiballZ_stat.py'
Jan 05 20:57:31 compute-0 sudo[199717]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:31 compute-0 python3.9[199719]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/podman_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:31 compute-0 sudo[199717]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:32 compute-0 sudo[199840]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pzaatdcqfyftaggbaghuqdzwhmhmawit ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646651.0311573-766-71713028392766/AnsiballZ_copy.py'
Jan 05 20:57:32 compute-0 sudo[199840]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:32 compute-0 python3.9[199842]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/podman_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646651.0311573-766-71713028392766/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:32 compute-0 sudo[199840]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:33 compute-0 sudo[199992]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qhfulgbdvgvfkfmakcafkwavtzrtihpl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646653.0539496-787-14517835260124/AnsiballZ_file.py'
Jan 05 20:57:33 compute-0 sudo[199992]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:33 compute-0 python3.9[199994]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:33 compute-0 sudo[199992]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:34 compute-0 sudo[200144]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikhlagolilklytkjjrptajrojurqhxkd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646653.9517825-795-34021132556248/AnsiballZ_file.py'
Jan 05 20:57:34 compute-0 sudo[200144]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:34 compute-0 python3.9[200146]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:34 compute-0 sudo[200144]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:35 compute-0 sudo[200296]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vfduojnsjepelutlzpexljpkiubyakqg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646654.764471-803-21799032243255/AnsiballZ_stat.py'
Jan 05 20:57:35 compute-0 sudo[200296]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:35 compute-0 python3.9[200298]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:35 compute-0 sudo[200296]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:35 compute-0 sudo[200374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gmowjmiyyagrwducsaejmfzvqidgjmmi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646654.764471-803-21799032243255/AnsiballZ_file.py'
Jan 05 20:57:35 compute-0 sudo[200374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:35 compute-0 python3.9[200376]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.y8nk0s2g recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:35 compute-0 sudo[200374]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:36 compute-0 podman[200500]: 2026-01-05 20:57:36.627938595 +0000 UTC m=+0.089521376 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=starting, health_failing_streak=2, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0)
Jan 05 20:57:36 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 20:57:36 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Failed with result 'exit-code'.
Jan 05 20:57:36 compute-0 python3.9[200539]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/podman_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:39 compute-0 sudo[200967]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jenwmqobgpcgjwfefwixnnbpfgtfewfs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646658.9271433-840-203850829274834/AnsiballZ_container_config_data.py'
Jan 05 20:57:39 compute-0 sudo[200967]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:39 compute-0 python3.9[200969]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/podman_exporter config_pattern=*.json debug=False
Jan 05 20:57:39 compute-0 sudo[200967]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:40 compute-0 sudo[201119]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mxoxzyhehdlmcowkivbsyzymascmespp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646659.9848416-851-182458866132203/AnsiballZ_container_config_hash.py'
Jan 05 20:57:40 compute-0 sudo[201119]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:40 compute-0 python3.9[201121]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:57:40 compute-0 sudo[201119]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:41 compute-0 sudo[201271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qxxoluaqbutkafctvobaszjbrfogujyd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646660.9078064-860-135942999024188/AnsiballZ_podman_container_info.py'
Jan 05 20:57:41 compute-0 sudo[201271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:41 compute-0 python3.9[201273]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:57:41 compute-0 sudo[201271]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:42 compute-0 sudo[201449]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fnllzdyqxyvrsuzsvwylzgpvukofhnil ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646662.3495464-873-142061722684460/AnsiballZ_edpm_container_manage.py'
Jan 05 20:57:42 compute-0 sudo[201449]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:57:42.824 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:57:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:57:42.827 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:57:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:57:42.827 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:57:43 compute-0 python3[201451]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/podman_exporter config_id=podman_exporter config_overrides={} config_patterns=*.json containers=['podman_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:57:44 compute-0 podman[201464]: 2026-01-05 20:57:44.573911008 +0000 UTC m=+1.448110529 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 05 20:57:44 compute-0 podman[201562]: 2026-01-05 20:57:44.79122337 +0000 UTC m=+0.062958494 container create 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, config_id=podman_exporter, container_name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 20:57:44 compute-0 podman[201562]: 2026-01-05 20:57:44.760631982 +0000 UTC m=+0.032367086 image pull e56d40e393eb5ea8704d9af8cf0d74665df83747106713fda91530f201837815 quay.io/navidys/prometheus-podman-exporter:v1.10.1
Jan 05 20:57:44 compute-0 python3[201451]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name podman_exporter --conmon-pidfile /run/podman_exporter.pid --env CONTAINER_HOST=unix:///run/podman/podman.sock --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck podman_exporter --label config_id=podman_exporter --label container_name=podman_exporter --label managed_by=edpm_ansible --label config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9882:9882 --user root --volume /var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z --volume /run/podman/podman.sock:/run/podman/podman.sock:rw,z --volume /var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z quay.io/navidys/prometheus-podman-exporter:v1.10.1 --web.config.file=/etc/podman_exporter/podman_exporter.yaml
Jan 05 20:57:45 compute-0 sudo[201449]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:45 compute-0 sudo[201750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-essaozqihkmudfkvpdktjceumtrnpdqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646665.2346988-881-24440355395831/AnsiballZ_stat.py'
Jan 05 20:57:45 compute-0 sudo[201750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:45 compute-0 python3.9[201752]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:45 compute-0 sudo[201750]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:46 compute-0 sudo[201916]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejdumqhjhzqhodcgvslzasvuzrfeapi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646666.1836934-890-85782773753888/AnsiballZ_file.py'
Jan 05 20:57:46 compute-0 sudo[201916]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:46 compute-0 podman[201878]: 2026-01-05 20:57:46.668508669 +0000 UTC m=+0.146579792 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 20:57:46 compute-0 python3.9[201923]: ansible-file Invoked with path=/etc/systemd/system/edpm_podman_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:46 compute-0 sudo[201916]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:47 compute-0 sudo[202006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwiocwxpblucgnpwvbgcqvvozjwsfcag ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646666.1836934-890-85782773753888/AnsiballZ_stat.py'
Jan 05 20:57:47 compute-0 sudo[202006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:47 compute-0 python3.9[202008]: ansible-stat Invoked with path=/etc/systemd/system/edpm_podman_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:57:47 compute-0 sudo[202006]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:48 compute-0 sudo[202157]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nqlanawdruacfmpcucptczjoaagjqnlx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646667.546885-890-55841792245455/AnsiballZ_copy.py'
Jan 05 20:57:48 compute-0 sudo[202157]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:48 compute-0 python3.9[202159]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646667.546885-890-55841792245455/source dest=/etc/systemd/system/edpm_podman_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:48 compute-0 sudo[202157]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:48 compute-0 sudo[202233]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvuvqjhzqurnbiczczesolbwsyzinppp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646667.546885-890-55841792245455/AnsiballZ_systemd.py'
Jan 05 20:57:48 compute-0 sudo[202233]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:49 compute-0 python3.9[202235]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:57:49 compute-0 systemd[1]: Reloading.
Jan 05 20:57:49 compute-0 systemd-sysv-generator[202269]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:49 compute-0 systemd-rc-local-generator[202265]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:49 compute-0 sudo[202233]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:49 compute-0 sudo[202345]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ojsdbbjpuddrqszfzpdwczxlekojymqf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646667.546885-890-55841792245455/AnsiballZ_systemd.py'
Jan 05 20:57:49 compute-0 sudo[202345]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:50 compute-0 python3.9[202347]: ansible-systemd Invoked with state=restarted name=edpm_podman_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:57:50 compute-0 systemd[1]: Reloading.
Jan 05 20:57:50 compute-0 systemd-rc-local-generator[202372]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:57:50 compute-0 systemd-sysv-generator[202378]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:57:50 compute-0 systemd[1]: Starting podman_exporter container...
Jan 05 20:57:50 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa37eb09d202243d0d34ee653cfeb220a439a9ab8f8e32ce0dc164d4affef0ab/merged/etc/podman_exporter/podman_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:50 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/fa37eb09d202243d0d34ee653cfeb220a439a9ab8f8e32ce0dc164d4affef0ab/merged/etc/podman_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 20:57:50 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.
Jan 05 20:57:50 compute-0 podman[202386]: 2026-01-05 20:57:50.820940263 +0000 UTC m=+0.220925898 container init 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.840Z caller=exporter.go:68 level=info msg="Starting podman-prometheus-exporter" version="(version=1.10.1, branch=HEAD, revision=1)"
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.840Z caller=exporter.go:69 level=info msg=metrics enhanced=false
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.840Z caller=handler.go:94 level=info msg="enabled collectors"
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.840Z caller=handler.go:105 level=info collector=container
Jan 05 20:57:50 compute-0 podman[202386]: 2026-01-05 20:57:50.854556982 +0000 UTC m=+0.254542617 container start 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 20:57:50 compute-0 podman[202386]: podman_exporter
Jan 05 20:57:50 compute-0 systemd[1]: Starting Podman API Service...
Jan 05 20:57:50 compute-0 systemd[1]: Started podman_exporter container.
Jan 05 20:57:50 compute-0 systemd[1]: Started Podman API Service.
Jan 05 20:57:50 compute-0 podman[202404]: 2026-01-05 20:57:50.875054181 +0000 UTC m=+0.061645800 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="/usr/bin/podman filtering at log level info"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="Setting parallel job count to 25"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="Using sqlite as database backend"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="Not using native diff for overlay, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="Using systemd socket activation to determine API endpoint"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="API service listening on \"/run/podman/podman.sock\". URI: \"unix:///run/podman/podman.sock\""
Jan 05 20:57:50 compute-0 sudo[202345]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:50 compute-0 podman[202426]: @ - - [05/Jan/2026:20:57:50 +0000] "GET /v4.9.3/libpod/_ping HTTP/1.1" 200 2 "" "Go-http-client/1.1"
Jan 05 20:57:50 compute-0 podman[202426]: time="2026-01-05T20:57:50Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 20:57:50 compute-0 podman[202426]: @ - - [05/Jan/2026:20:57:50 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=true&sync=false HTTP/1.1" 200 18094 "" "Go-http-client/1.1"
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.949Z caller=exporter.go:96 level=info msg="Listening on" address=:9882
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.950Z caller=tls_config.go:313 level=info msg="Listening on" address=[::]:9882
Jan 05 20:57:50 compute-0 podman[202419]: 2026-01-05 20:57:50.950370685 +0000 UTC m=+0.082113567 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=starting, health_failing_streak=1, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 20:57:50 compute-0 podman_exporter[202401]: ts=2026-01-05T20:57:50.950Z caller=tls_config.go:349 level=info msg="TLS is enabled." http2=true address=[::]:9882
Jan 05 20:57:50 compute-0 systemd[1]: 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094-79daa4fab1ac970c.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 20:57:50 compute-0 systemd[1]: 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094-79daa4fab1ac970c.service: Failed with result 'exit-code'.
Jan 05 20:57:52 compute-0 python3.9[202610]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:57:52 compute-0 nova_compute[186018]: 2026-01-05 20:57:52.950 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:52 compute-0 nova_compute[186018]: 2026-01-05 20:57:52.979 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:52 compute-0 nova_compute[186018]: 2026-01-05 20:57:52.980 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 20:57:52 compute-0 nova_compute[186018]: 2026-01-05 20:57:52.980 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.042 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.043 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.044 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.045 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.046 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.046 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.074 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.075 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.075 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.075 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:57:53 compute-0 auditd[703]: Audit daemon rotating log files
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.327 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.328 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5853MB free_disk=72.59503936767578GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.329 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.329 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.409 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.410 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.440 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.457 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.459 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 20:57:53 compute-0 nova_compute[186018]: 2026-01-05 20:57:53.459 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.130s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:57:53 compute-0 sudo[202760]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cprxbjejjynnpufzvjfscbidjzesdkfx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646673.271317-931-68545005489188/AnsiballZ_stat.py'
Jan 05 20:57:53 compute-0 sudo[202760]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:53 compute-0 python3.9[202762]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:53 compute-0 sudo[202760]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:54 compute-0 sudo[202885]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-upgjubisjiwqllicvjwxuvfazbpehwsd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646673.271317-931-68545005489188/AnsiballZ_copy.py'
Jan 05 20:57:54 compute-0 sudo[202885]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:54 compute-0 python3.9[202887]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646673.271317-931-68545005489188/.source.yaml _original_basename=.j5fzh4z5 follow=False checksum=99953c157a188f8671e411b34751c2518bebf948 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:54 compute-0 sudo[202885]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:54 compute-0 nova_compute[186018]: 2026-01-05 20:57:54.874 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:54 compute-0 nova_compute[186018]: 2026-01-05 20:57:54.875 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:54 compute-0 nova_compute[186018]: 2026-01-05 20:57:54.875 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:54 compute-0 nova_compute[186018]: 2026-01-05 20:57:54.876 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:57:55 compute-0 sudo[203037]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lmmxyastqttgzcsgrqrchaiylbgknqai ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646674.8349888-946-17127087850108/AnsiballZ_stat.py'
Jan 05 20:57:55 compute-0 sudo[203037]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:55 compute-0 python3.9[203039]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/openstack_network_exporter/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:55 compute-0 sudo[203037]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:55 compute-0 sudo[203160]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gjwavfvmafyksvhcgreqhzdzylqlnbpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646674.8349888-946-17127087850108/AnsiballZ_copy.py'
Jan 05 20:57:55 compute-0 sudo[203160]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:56 compute-0 python3.9[203162]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/openstack_network_exporter/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646674.8349888-946-17127087850108/.source _original_basename=healthcheck follow=False checksum=e380c11c36804bfc65a818f2960cfa663daacfe5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:56 compute-0 sudo[203160]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:57 compute-0 sudo[203312]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wazizvktjuoufignelnqiabnrccurwdv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646676.7219095-967-170006427532777/AnsiballZ_file.py'
Jan 05 20:57:57 compute-0 sudo[203312]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:57 compute-0 python3.9[203314]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:57 compute-0 sudo[203312]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:58 compute-0 sudo[203464]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vpctgmswuwwosjxsgwmijgrlzdwtfevt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646677.605762-975-105673648350187/AnsiballZ_file.py'
Jan 05 20:57:58 compute-0 sudo[203464]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:58 compute-0 python3.9[203466]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:57:58 compute-0 sudo[203464]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:58 compute-0 podman[203544]: 2026-01-05 20:57:58.747107487 +0000 UTC m=+0.082548059 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 20:57:58 compute-0 sudo[203641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hljovczokzbelbwdidbqsffnlomymbop ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646678.4693947-983-57255326994375/AnsiballZ_stat.py'
Jan 05 20:57:58 compute-0 sudo[203641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:59 compute-0 python3.9[203643]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_compute.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:57:59 compute-0 sudo[203641]: pam_unix(sudo:session): session closed for user root
Jan 05 20:57:59 compute-0 sudo[203719]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gzbwusnzkdwvtqesjpjqtmrwavjxsgth ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646678.4693947-983-57255326994375/AnsiballZ_file.py'
Jan 05 20:57:59 compute-0 sudo[203719]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:57:59 compute-0 python3.9[203721]: ansible-ansible.legacy.file Invoked with mode=0600 dest=/var/lib/kolla/config_files/ceilometer_agent_compute.json _original_basename=.rcbbs45l recurse=False state=file path=/var/lib/kolla/config_files/ceilometer_agent_compute.json force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:57:59 compute-0 sudo[203719]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:00 compute-0 python3.9[203871]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:02 compute-0 sudo[204292]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vjapiailmnidcogkoxdzautqnvsmnttt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646682.5057878-1020-8790667845458/AnsiballZ_container_config_data.py'
Jan 05 20:58:02 compute-0 sudo[204292]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:03 compute-0 python3.9[204294]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_pattern=*.json debug=False
Jan 05 20:58:03 compute-0 sudo[204292]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:03 compute-0 sudo[204444]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jqgifwuunalyznbzrrqpxsmcmmadbwav ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646683.462446-1031-262106041642627/AnsiballZ_container_config_hash.py'
Jan 05 20:58:03 compute-0 sudo[204444]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:04 compute-0 python3.9[204446]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 20:58:04 compute-0 sudo[204444]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:04 compute-0 sudo[204596]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lfbnzfyfdqfqqzivmtmcgknzmlqizjaw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646684.4010136-1040-246043485334826/AnsiballZ_podman_container_info.py'
Jan 05 20:58:04 compute-0 sudo[204596]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:05 compute-0 python3.9[204598]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 20:58:05 compute-0 sudo[204596]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:06 compute-0 sudo[204775]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auryipmvwtoxgerptgrayxbxuzoofvoi ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646686.1552806-1053-248782998120623/AnsiballZ_edpm_container_manage.py'
Jan 05 20:58:06 compute-0 sudo[204775]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:06 compute-0 python3[204777]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/openstack_network_exporter config_id=openstack_network_exporter config_overrides={} config_patterns=*.json containers=['openstack_network_exporter'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 20:58:07 compute-0 podman[204803]: 2026-01-05 20:58:07.767415174 +0000 UTC m=+0.101965078 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=unhealthy, health_failing_streak=3, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 05 20:58:07 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 20:58:07 compute-0 systemd[1]: dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2-6dbd6d75b0422509.service: Failed with result 'exit-code'.
Jan 05 20:58:09 compute-0 podman[204790]: 2026-01-05 20:58:09.565784311 +0000 UTC m=+2.655700048 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 20:58:09 compute-0 podman[204907]: 2026-01-05 20:58:09.715975078 +0000 UTC m=+0.053715388 container create aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, release=1755695350, version=9.6, architecture=x86_64, io.k8s.description=The Universal Base Image 
Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, name=ubi9-minimal)
Jan 05 20:58:09 compute-0 podman[204907]: 2026-01-05 20:58:09.690070735 +0000 UTC m=+0.027811085 image pull 186c5e97c6f6912533851a0044ea6da23938910e7bddfb4a6c0be9b48ab2a1d1 quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 20:58:09 compute-0 python3[204777]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name openstack_network_exporter --conmon-pidfile /run/openstack_network_exporter.pid --env OPENSTACK_NETWORK_EXPORTER_YAML=/etc/openstack_network_exporter/openstack_network_exporter.yaml --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6 --healthcheck-command /openstack/healthcheck openstack-netwo --label config_id=openstack_network_exporter --label container_name=openstack_network_exporter --label managed_by=edpm_ansible --label config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 9105:9105 --volume /var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z --volume 
/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z --volume /var/run/openvswitch:/run/openvswitch:rw,z --volume /var/lib/openvswitch/ovn:/run/ovn:rw,z --volume /proc:/host/proc:ro --volume /var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified
Jan 05 20:58:09 compute-0 sudo[204775]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:10 compute-0 sudo[205095]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jigzepwobxortwwodwgnqhbvjrhxrrmy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646690.1406133-1061-15256887282232/AnsiballZ_stat.py'
Jan 05 20:58:10 compute-0 sudo[205095]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:10 compute-0 python3.9[205097]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:58:10 compute-0 sudo[205095]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:11 compute-0 sudo[205249]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pucrwtyhynuwxmxsippxatdzwnwqxbil ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646691.217511-1070-32338782225762/AnsiballZ_file.py'
Jan 05 20:58:11 compute-0 sudo[205249]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:11 compute-0 python3.9[205251]: ansible-file Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:11 compute-0 sudo[205249]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:12 compute-0 sudo[205325]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wngidhlyihkemfzbjyohcutegotdrzuf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646691.217511-1070-32338782225762/AnsiballZ_stat.py'
Jan 05 20:58:12 compute-0 sudo[205325]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:12 compute-0 python3.9[205327]: ansible-stat Invoked with path=/etc/systemd/system/edpm_openstack_network_exporter_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:58:12 compute-0 sudo[205325]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:13 compute-0 sudo[205476]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vabsuxajtksosaeiksfibhwhyzhefumu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646692.5043786-1070-260285411032852/AnsiballZ_copy.py'
Jan 05 20:58:13 compute-0 sudo[205476]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:13 compute-0 python3.9[205478]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646692.5043786-1070-260285411032852/source dest=/etc/systemd/system/edpm_openstack_network_exporter.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:13 compute-0 sudo[205476]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:13 compute-0 sudo[205552]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fehrncanzvombhggdojgfvqokrzovtlo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646692.5043786-1070-260285411032852/AnsiballZ_systemd.py'
Jan 05 20:58:13 compute-0 sudo[205552]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:14 compute-0 python3.9[205554]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:58:14 compute-0 systemd[1]: Reloading.
Jan 05 20:58:14 compute-0 systemd-rc-local-generator[205581]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:58:14 compute-0 systemd-sysv-generator[205585]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:58:14 compute-0 sudo[205552]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:14 compute-0 sudo[205663]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lkswlscvyyhwxsozsinkvdcxepcknypz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646692.5043786-1070-260285411032852/AnsiballZ_systemd.py'
Jan 05 20:58:14 compute-0 sudo[205663]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:15 compute-0 python3.9[205665]: ansible-systemd Invoked with state=restarted name=edpm_openstack_network_exporter.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:58:15 compute-0 systemd[1]: Reloading.
Jan 05 20:58:15 compute-0 systemd-rc-local-generator[205696]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:58:15 compute-0 systemd-sysv-generator[205699]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:58:15 compute-0 systemd[1]: Starting openstack_network_exporter container...
Jan 05 20:58:15 compute-0 systemd[1]: Started libcrun container.
Jan 05 20:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a61d9966f3683d4676340b2cda4c1b6593a23a4bcc50156a4ffeb6faa22836/merged/run/ovn supports timestamps until 2038 (0x7fffffff)
Jan 05 20:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a61d9966f3683d4676340b2cda4c1b6593a23a4bcc50156a4ffeb6faa22836/merged/etc/openstack_network_exporter/openstack_network_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 20:58:15 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/e2a61d9966f3683d4676340b2cda4c1b6593a23a4bcc50156a4ffeb6faa22836/merged/etc/openstack_network_exporter/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 20:58:15 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.
Jan 05 20:58:15 compute-0 podman[205705]: 2026-01-05 20:58:15.86566322 +0000 UTC m=+0.191909793 container init aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, 
version=9.6, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, architecture=x86_64)
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *bridge.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *coverage.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *datapath.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *iface.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *memory.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:55: *ovnnorthd.Collector not registered, metric set not enabled
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *ovn.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:55: *ovsdbserver.Collector not registered, metric set not enabled
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *pmd_perf.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *pmd_rxq.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: INFO    20:58:15 main.go:48: registering *vswitch.Collector
Jan 05 20:58:15 compute-0 openstack_network_exporter[205720]: NOTICE  20:58:15 main.go:76: listening on https://:9105/metrics
Jan 05 20:58:15 compute-0 podman[205705]: 2026-01-05 20:58:15.905070414 +0000 UTC m=+0.231316987 container start aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': 
'/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, distribution-scope=public, managed_by=edpm_ansible, name=ubi9-minimal)
Jan 05 20:58:15 compute-0 podman[205705]: openstack_network_exporter
Jan 05 20:58:15 compute-0 systemd[1]: Started openstack_network_exporter container.
Jan 05 20:58:15 compute-0 sudo[205663]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:16 compute-0 podman[205730]: 2026-01-05 20:58:16.018053476 +0000 UTC m=+0.092370222 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9-minimal, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Jan 05 20:58:16 compute-0 python3.9[205905]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 20:58:17 compute-0 sudo[206075]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pawcbvpexdbsjptalewpquuyhejtlmls ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646697.3330665-1111-66467450646299/AnsiballZ_stat.py'
Jan 05 20:58:17 compute-0 sudo[206075]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:17 compute-0 podman[206012]: 2026-01-05 20:58:17.813645659 +0000 UTC m=+0.153709272 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 05 20:58:17 compute-0 python3.9[206083]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:18 compute-0 sudo[206075]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:18 compute-0 sudo[206206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vodxkcntpsvaybklpcnxubtiucmplowz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646697.3330665-1111-66467450646299/AnsiballZ_copy.py'
Jan 05 20:58:18 compute-0 sudo[206206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:18 compute-0 python3.9[206208]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646697.3330665-1111-66467450646299/.source.yaml _original_basename=.junu9oe0 follow=False checksum=a8a1570a79f428aeaa486431b05b36284cb535b1 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:18 compute-0 sudo[206206]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:19 compute-0 sudo[206358]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ffebanpmotonztsxsrgkqjksvrdflcas ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646699.0259397-1126-148189062365068/AnsiballZ_find.py'
Jan 05 20:58:19 compute-0 sudo[206358]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:19 compute-0 python3.9[206360]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:58:19 compute-0 sudo[206358]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:20 compute-0 sudo[206510]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kvvnvwjlwtzsbqdgkgidwaizbrsfpsnz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646700.048641-1136-250960652276274/AnsiballZ_podman_container_info.py'
Jan 05 20:58:20 compute-0 sudo[206510]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:20 compute-0 python3.9[206512]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 05 20:58:20 compute-0 sudo[206510]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:21 compute-0 sudo[206699]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-viqyuijyxjmahpzyvfbmugdkdthancyy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646701.1042516-1144-183989003807944/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:21 compute-0 sudo[206699]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:21 compute-0 podman[206650]: 2026-01-05 20:58:21.757732363 +0000 UTC m=+0.101653910 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 20:58:21 compute-0 podman[206651]: 2026-01-05 20:58:21.757833406 +0000 UTC m=+0.094739945 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 20:58:21 compute-0 python3.9[206714]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:22 compute-0 systemd[1]: Started libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope.
Jan 05 20:58:22 compute-0 podman[206720]: 2026-01-05 20:58:22.103388738 +0000 UTC m=+0.123381591 container exec 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.vendor=CentOS, container_name=ovn_controller)
Jan 05 20:58:22 compute-0 podman[206720]: 2026-01-05 20:58:22.114921636 +0000 UTC m=+0.134914479 container exec_died 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 05 20:58:22 compute-0 sudo[206699]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:22 compute-0 systemd[1]: libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope: Deactivated successfully.
Jan 05 20:58:22 compute-0 sudo[206901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ugpqnyucelhxjjzzyfmiddjzkpybhxck ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646702.4239316-1152-127160114498217/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:22 compute-0 sudo[206901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:23 compute-0 python3.9[206903]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:23 compute-0 systemd[1]: Started libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope.
Jan 05 20:58:23 compute-0 podman[206904]: 2026-01-05 20:58:23.268003924 +0000 UTC m=+0.116228819 container exec 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 20:58:23 compute-0 podman[206904]: 2026-01-05 20:58:23.299717122 +0000 UTC m=+0.147942017 container exec_died 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:58:23 compute-0 systemd[1]: libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope: Deactivated successfully.
Jan 05 20:58:23 compute-0 sudo[206901]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:24 compute-0 sudo[207086]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxrhhdoggfqjfcdojhccdjicgvhklmyq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646703.6341777-1160-82471234813042/AnsiballZ_file.py'
Jan 05 20:58:24 compute-0 sudo[207086]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:24 compute-0 python3.9[207088]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:24 compute-0 sudo[207086]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:25 compute-0 sudo[207238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-reqxgzilvkvadfquekikbnyxqrtrvucb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646704.6476834-1169-82644614181434/AnsiballZ_podman_container_info.py'
Jan 05 20:58:25 compute-0 sudo[207238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:25 compute-0 python3.9[207240]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 05 20:58:25 compute-0 sudo[207238]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:26 compute-0 sudo[207404]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xwkuqfwvaogokajjxenlawoswjcpuczu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646705.7117574-1177-237141577413762/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:26 compute-0 sudo[207404]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:26 compute-0 python3.9[207406]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:26 compute-0 systemd[1]: Started libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope.
Jan 05 20:58:26 compute-0 podman[207407]: 2026-01-05 20:58:26.48980777 +0000 UTC m=+0.105675656 container exec 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 20:58:26 compute-0 podman[207407]: 2026-01-05 20:58:26.522392512 +0000 UTC m=+0.138260358 container exec_died 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible)
Jan 05 20:58:26 compute-0 systemd[1]: libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope: Deactivated successfully.
Jan 05 20:58:26 compute-0 sudo[207404]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:27 compute-0 sudo[207590]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oiaudehuoyniunyhetjejmyxlfhritei ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646706.851241-1185-121745847752272/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:27 compute-0 sudo[207590]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:27 compute-0 python3.9[207592]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:27 compute-0 systemd[1]: Started libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope.
Jan 05 20:58:27 compute-0 podman[207593]: 2026-01-05 20:58:27.599497359 +0000 UTC m=+0.093923723 container exec 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 20:58:27 compute-0 podman[207593]: 2026-01-05 20:58:27.63095098 +0000 UTC m=+0.125377314 container exec_died 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 20:58:27 compute-0 systemd[1]: libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope: Deactivated successfully.
Jan 05 20:58:27 compute-0 sudo[207590]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:28 compute-0 sudo[207772]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tlobqcxiuyfapztcaemmpsagbmflowic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646707.9279864-1193-218829772849201/AnsiballZ_file.py'
Jan 05 20:58:28 compute-0 sudo[207772]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:28 compute-0 python3.9[207774]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:28 compute-0 sudo[207772]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:29 compute-0 sudo[207935]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tggwfrmpcxspodauabidksealqlmdttu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646708.8818734-1202-85171817234371/AnsiballZ_podman_container_info.py'
Jan 05 20:58:29 compute-0 sudo[207935]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:29 compute-0 podman[207898]: 2026-01-05 20:58:29.276283625 +0000 UTC m=+0.069897681 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 20:58:29 compute-0 python3.9[207944]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 05 20:58:29 compute-0 sudo[207935]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:30 compute-0 sudo[208114]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-olxurvsbvhcljnjrcaumghrqctxihgfk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646709.8114939-1210-196608046476102/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:30 compute-0 sudo[208114]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:30 compute-0 python3.9[208116]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:30 compute-0 systemd[1]: Started libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope.
Jan 05 20:58:30 compute-0 podman[208117]: 2026-01-05 20:58:30.635145407 +0000 UTC m=+0.114199666 container exec dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 20:58:30 compute-0 podman[208117]: 2026-01-05 20:58:30.672066134 +0000 UTC m=+0.151120333 container exec_died dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 20:58:30 compute-0 systemd[1]: libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope: Deactivated successfully.
Jan 05 20:58:30 compute-0 sudo[208114]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:31 compute-0 sudo[208297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wqrgiquxpfgenwjygvybqghdzhlcahkr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646711.0063307-1218-121823034929079/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:31 compute-0 sudo[208297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:31 compute-0 python3.9[208299]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:31 compute-0 systemd[1]: Started libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope.
Jan 05 20:58:31 compute-0 podman[208300]: 2026-01-05 20:58:31.747942888 +0000 UTC m=+0.087138901 container exec dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, 
org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 20:58:31 compute-0 podman[208300]: 2026-01-05 20:58:31.77864974 +0000 UTC m=+0.117845753 container exec_died dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:58:31 compute-0 systemd[1]: libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope: Deactivated successfully.
Jan 05 20:58:31 compute-0 sudo[208297]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:32 compute-0 sudo[208481]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zebzfhmucqceivnuheepnvcrubwubrse ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646712.1456509-1226-106842338424549/AnsiballZ_file.py'
Jan 05 20:58:32 compute-0 sudo[208481]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:32 compute-0 python3.9[208483]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:32 compute-0 sudo[208481]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:33 compute-0 sudo[208633]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-buekrjkfmtgwixoahycejazkvpzzzmrg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646713.0067892-1235-138880729186251/AnsiballZ_podman_container_info.py'
Jan 05 20:58:33 compute-0 sudo[208633]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:33 compute-0 python3.9[208635]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 05 20:58:33 compute-0 sudo[208633]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:34 compute-0 sudo[208798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pmcusrbhyhxncihlhektbqzekcchfnxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646714.0161734-1243-110883337926658/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:34 compute-0 sudo[208798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:34 compute-0 python3.9[208800]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:34 compute-0 systemd[1]: Started libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope.
Jan 05 20:58:34 compute-0 podman[208801]: 2026-01-05 20:58:34.785066705 +0000 UTC m=+0.112223172 container exec b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 20:58:34 compute-0 podman[208801]: 2026-01-05 20:58:34.82075298 +0000 UTC m=+0.147909457 container exec_died b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 20:58:34 compute-0 systemd[1]: libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope: Deactivated successfully.
Jan 05 20:58:34 compute-0 sudo[208798]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:35 compute-0 sudo[208980]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqwbsyopzfknivzaovprscefqugqbosj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646715.1195848-1251-10326613760709/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:35 compute-0 sudo[208980]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:35 compute-0 python3.9[208982]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:35 compute-0 systemd[1]: Started libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope.
Jan 05 20:58:35 compute-0 podman[208983]: 2026-01-05 20:58:35.917118392 +0000 UTC m=+0.123244557 container exec b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 20:58:35 compute-0 podman[208983]: 2026-01-05 20:58:35.956099704 +0000 UTC m=+0.162225849 container exec_died b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 20:58:36 compute-0 systemd[1]: libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope: Deactivated successfully.
Jan 05 20:58:36 compute-0 sudo[208980]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:36 compute-0 sudo[209164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjajpkqppwvokzzpwiaqbzuyngngjfjw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646716.2484553-1259-274013717556257/AnsiballZ_file.py'
Jan 05 20:58:36 compute-0 sudo[209164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:36 compute-0 python3.9[209166]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:36 compute-0 sudo[209164]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:37 compute-0 sudo[209316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vsoqcitneomhxqfpzffayjfnxmcuzmjo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646717.1756551-1268-91302315725570/AnsiballZ_podman_container_info.py'
Jan 05 20:58:37 compute-0 sudo[209316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:37 compute-0 python3.9[209318]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 05 20:58:37 compute-0 sudo[209316]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:38 compute-0 sudo[209495]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-glhzuuvvleiizechhgesrygifphirsfz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646718.1970153-1276-245766425497508/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:38 compute-0 sudo[209495]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:38 compute-0 podman[209456]: 2026-01-05 20:58:38.706523563 +0000 UTC m=+0.114637406 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 20:58:38 compute-0 python3.9[209501]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:39 compute-0 systemd[1]: Started libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope.
Jan 05 20:58:39 compute-0 podman[209505]: 2026-01-05 20:58:39.027637274 +0000 UTC m=+0.107180770 container exec 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 20:58:39 compute-0 podman[209505]: 2026-01-05 20:58:39.060826103 +0000 UTC m=+0.140369589 container exec_died 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 20:58:39 compute-0 systemd[1]: libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope: Deactivated successfully.
Jan 05 20:58:39 compute-0 sudo[209495]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:39 compute-0 sudo[209685]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tufvcptpnjveknscpudyolgujrvocjxg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646719.349881-1284-198567395578285/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:39 compute-0 sudo[209685]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:39 compute-0 python3.9[209687]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:40 compute-0 systemd[1]: Started libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope.
Jan 05 20:58:40 compute-0 podman[209688]: 2026-01-05 20:58:40.08940412 +0000 UTC m=+0.111347877 container exec 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 20:58:40 compute-0 podman[209688]: 2026-01-05 20:58:40.125597348 +0000 UTC m=+0.147541075 container exec_died 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 20:58:40 compute-0 systemd[1]: libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope: Deactivated successfully.
Jan 05 20:58:40 compute-0 sudo[209685]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:40 compute-0 sudo[209870]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbxxwfzyqkayyztuhgmleczgkobpgyfe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646720.422839-1292-135830421474742/AnsiballZ_file.py'
Jan 05 20:58:40 compute-0 sudo[209870]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:41 compute-0 python3.9[209872]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:41 compute-0 sudo[209870]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:41 compute-0 sudo[210022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mitcomsbheacojiviapngqfxmdcncjgm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646721.3579793-1301-221406399389241/AnsiballZ_podman_container_info.py'
Jan 05 20:58:41 compute-0 sudo[210022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:41 compute-0 python3.9[210024]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 05 20:58:42 compute-0 sudo[210022]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:42 compute-0 sudo[210188]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oduqntfwvrocajbilgrkdxmykyowojte ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646722.334154-1309-14276376215629/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:42 compute-0 sudo[210188]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:58:42.826 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:58:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:58:42.828 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:58:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:58:42.828 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:58:42 compute-0 python3.9[210190]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:43 compute-0 systemd[1]: Started libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope.
Jan 05 20:58:43 compute-0 podman[210191]: 2026-01-05 20:58:43.113009242 +0000 UTC m=+0.103829981 container exec aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, version=9.6, container_name=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vendor=Red Hat, Inc.)
Jan 05 20:58:43 compute-0 podman[210191]: 2026-01-05 20:58:43.149963239 +0000 UTC m=+0.140783988 container exec_died aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, version=9.6, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, vendor=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 20:58:43 compute-0 systemd[1]: libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope: Deactivated successfully.
Jan 05 20:58:43 compute-0 sudo[210188]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:43 compute-0 sudo[210373]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iokfvfbpmsdplvcxdboadoqikozmjnbi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646723.4764595-1317-249507073947070/AnsiballZ_podman_container_exec.py'
Jan 05 20:58:43 compute-0 sudo[210373]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:44 compute-0 python3.9[210375]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 20:58:44 compute-0 systemd[1]: Started libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope.
Jan 05 20:58:44 compute-0 podman[210376]: 2026-01-05 20:58:44.220202467 +0000 UTC m=+0.088041157 container exec aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, io.buildah.version=1.33.7, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, architecture=x86_64, io.openshift.expose-services=, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41)
Jan 05 20:58:44 compute-0 podman[210376]: 2026-01-05 20:58:44.25697139 +0000 UTC m=+0.124810040 container exec_died aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, version=9.6, architecture=x86_64, distribution-scope=public, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=)
Jan 05 20:58:44 compute-0 systemd[1]: libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope: Deactivated successfully.
Jan 05 20:58:44 compute-0 sudo[210373]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:44 compute-0 sudo[210558]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htttiuaumfppspuzapzpjffzaqgtitxw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646724.543751-1325-88432989946912/AnsiballZ_file.py'
Jan 05 20:58:44 compute-0 sudo[210558]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:45 compute-0 python3.9[210560]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:45 compute-0 sudo[210558]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:45 compute-0 sudo[210710]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jnfsvrcqbmccgkqqhreemrvinjfwbfol ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646725.4451985-1334-12672714525551/AnsiballZ_file.py'
Jan 05 20:58:45 compute-0 sudo[210710]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:46 compute-0 python3.9[210712]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:46 compute-0 sudo[210710]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:46 compute-0 sudo[210875]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yekeblygvjdsjeniooefdxzypqiaalgg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646726.3774295-1342-7396719462683/AnsiballZ_stat.py'
Jan 05 20:58:46 compute-0 sudo[210875]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:46 compute-0 podman[210836]: 2026-01-05 20:58:46.756041504 +0000 UTC m=+0.095312398 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41)
Jan 05 20:58:46 compute-0 python3.9[210885]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/telemetry.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:46 compute-0 sudo[210875]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:47 compute-0 sudo[211006]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-aclbybfcllcfxviqxjikstegikuionxh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646726.3774295-1342-7396719462683/AnsiballZ_copy.py'
Jan 05 20:58:47 compute-0 sudo[211006]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:47 compute-0 python3.9[211008]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/telemetry.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646726.3774295-1342-7396719462683/.source.yaml _original_basename=firewall.yaml follow=False checksum=d942d984493b214bda2913f753ff68cdcedff00e backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:47 compute-0 sudo[211006]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:48 compute-0 sudo[211175]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acqfjtkjvvmfirhzxlmsiqfadgkjxvle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646728.0427108-1358-277341328079232/AnsiballZ_file.py'
Jan 05 20:58:48 compute-0 sudo[211175]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:48 compute-0 podman[211132]: 2026-01-05 20:58:48.543267806 +0000 UTC m=+0.180718754 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 20:58:48 compute-0 python3.9[211184]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:48 compute-0 sudo[211175]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:49 compute-0 sudo[211336]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-onekfgvujngbetsjpdssbzllmklmkmqm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646728.921632-1366-201435507724293/AnsiballZ_stat.py'
Jan 05 20:58:49 compute-0 sudo[211336]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:49 compute-0 python3.9[211338]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:49 compute-0 sudo[211336]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:49 compute-0 sudo[211414]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tuehjhxkwuihnmofjoeuoavptlajqbog ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646728.921632-1366-201435507724293/AnsiballZ_file.py'
Jan 05 20:58:49 compute-0 sudo[211414]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:50 compute-0 python3.9[211416]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:50 compute-0 sudo[211414]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:50 compute-0 sudo[211566]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zlyrithiyvubuzbhorfjrzxkxslojifx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646730.2572808-1378-31545406415042/AnsiballZ_stat.py'
Jan 05 20:58:50 compute-0 sudo[211566]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:51 compute-0 python3.9[211568]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:51 compute-0 sudo[211566]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:51 compute-0 sudo[211644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjwvjxkwvzdbtfiiizwuylfosuzsgtap ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646730.2572808-1378-31545406415042/AnsiballZ_file.py'
Jan 05 20:58:51 compute-0 sudo[211644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:51 compute-0 python3.9[211646]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.t0nj72ww recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:51 compute-0 sudo[211644]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:52 compute-0 podman[211770]: 2026-01-05 20:58:52.39367908 +0000 UTC m=+0.058046331 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:58:52 compute-0 sudo[211828]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bikcddmmnmmlgfjjmnxbeoaggusaxafd ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646731.8173647-1390-150621421343741/AnsiballZ_stat.py'
Jan 05 20:58:52 compute-0 sudo[211828]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:52 compute-0 podman[211771]: 2026-01-05 20:58:52.448067344 +0000 UTC m=+0.100963605 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 20:58:52 compute-0 python3.9[211841]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:52 compute-0 sudo[211828]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:53 compute-0 sudo[211917]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apzvwhacmkhtcksccfdxprvtorvetmzk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646731.8173647-1390-150621421343741/AnsiballZ_file.py'
Jan 05 20:58:53 compute-0 sudo[211917]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:53 compute-0 python3.9[211919]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:53 compute-0 sudo[211917]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.479 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.480 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.481 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.481 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.481 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.551 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.552 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.552 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.552 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.796 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.797 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5875MB free_disk=72.48274612426758GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.797 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.798 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.878 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.879 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.917 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.943 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.946 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 20:58:53 compute-0 nova_compute[186018]: 2026-01-05 20:58:53.946 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:58:53 compute-0 sudo[212069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xdlgrhsbxjvofrxqovxzabnamahxthiq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646733.5334094-1403-155444451382203/AnsiballZ_command.py'
Jan 05 20:58:53 compute-0 sudo[212069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:54 compute-0 python3.9[212071]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:58:54 compute-0 sudo[212069]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:54 compute-0 nova_compute[186018]: 2026-01-05 20:58:54.926 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:54 compute-0 nova_compute[186018]: 2026-01-05 20:58:54.927 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:54 compute-0 nova_compute[186018]: 2026-01-05 20:58:54.928 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:55 compute-0 sudo[212222]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gnbspaydldzuysslgowbaovusqwhkazy ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646734.475771-1411-191958814041547/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 20:58:55 compute-0 sudo[212222]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:55 compute-0 python3[212224]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 20:58:55 compute-0 sudo[212222]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:55 compute-0 nova_compute[186018]: 2026-01-05 20:58:55.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:56 compute-0 sudo[212374]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uaykiyjosdynddbvkkhwoxnhejhnruxc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646735.5745387-1419-278294507599148/AnsiballZ_stat.py'
Jan 05 20:58:56 compute-0 sudo[212374]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:56 compute-0 python3.9[212376]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:56 compute-0 sudo[212374]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:56 compute-0 nova_compute[186018]: 2026-01-05 20:58:56.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:58:56 compute-0 sudo[212452]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kelfeqdvvgjenarnfyrjrnnxlulnmkpn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646735.5745387-1419-278294507599148/AnsiballZ_file.py'
Jan 05 20:58:56 compute-0 sudo[212452]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:56 compute-0 python3.9[212454]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:56 compute-0 sudo[212452]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:57 compute-0 sudo[212604]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-apbgvwdrkxujdssqinylgndjykwvjphr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646737.0533898-1431-274015775317218/AnsiballZ_stat.py'
Jan 05 20:58:57 compute-0 sudo[212604]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:57 compute-0 python3.9[212606]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:57 compute-0 sudo[212604]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:58 compute-0 sudo[212682]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dbvfcvahacoqibleeglrjnbswvwwszlb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646737.0533898-1431-274015775317218/AnsiballZ_file.py'
Jan 05 20:58:58 compute-0 sudo[212682]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:58 compute-0 python3.9[212684]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:58 compute-0 sudo[212682]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:58 compute-0 sudo[212834]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fccyluqqvyswkmrbuatoczdkwvlmzsle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646738.5167446-1443-111519103165052/AnsiballZ_stat.py'
Jan 05 20:58:58 compute-0 sudo[212834]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:59 compute-0 python3.9[212836]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:58:59 compute-0 sudo[212834]: pam_unix(sudo:session): session closed for user root
Jan 05 20:58:59 compute-0 sudo[212924]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-byiqcrkxllzopmwsbnomxvgpkwykensi ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646738.5167446-1443-111519103165052/AnsiballZ_file.py'
Jan 05 20:58:59 compute-0 sudo[212924]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:58:59 compute-0 podman[212886]: 2026-01-05 20:58:59.575384234 +0000 UTC m=+0.076768052 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 20:58:59 compute-0 python3.9[212930]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:58:59 compute-0 sudo[212924]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:00 compute-0 sudo[213087]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fhjqafumorkbgvhcdcpmnqqhufeljtpk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646739.9719684-1455-179792691917089/AnsiballZ_stat.py'
Jan 05 20:59:00 compute-0 sudo[213087]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:00 compute-0 python3.9[213089]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:00 compute-0 sudo[213087]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:00 compute-0 sudo[213165]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ixgzufoogeaxhcyynrgzxpguzambefxb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646739.9719684-1455-179792691917089/AnsiballZ_file.py'
Jan 05 20:59:00 compute-0 sudo[213165]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:01 compute-0 python3.9[213167]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:01 compute-0 sudo[213165]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:01 compute-0 sudo[213317]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tkoccarynhyxnwgvevzrahenwheuzuor ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646741.4526973-1467-267874238380467/AnsiballZ_stat.py'
Jan 05 20:59:01 compute-0 sudo[213317]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:02 compute-0 python3.9[213319]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:02 compute-0 sudo[213317]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:02 compute-0 sudo[213442]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwxtxlpqoxaelskjleocbmcmgopjqqbt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646741.4526973-1467-267874238380467/AnsiballZ_copy.py'
Jan 05 20:59:02 compute-0 sudo[213442]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:02 compute-0 python3.9[213444]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646741.4526973-1467-267874238380467/.source.nft follow=False _original_basename=ruleset.j2 checksum=fb3275eced3a2e06312143189928124e1b2df34a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:03 compute-0 sudo[213442]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:03 compute-0 sudo[213594]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cndsjksuasrburgdwunypkxkcwewjnci ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646743.2125955-1482-32548089751543/AnsiballZ_file.py'
Jan 05 20:59:03 compute-0 sudo[213594]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:03 compute-0 python3.9[213596]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:03 compute-0 sudo[213594]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:04 compute-0 sudo[213746]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bmjdnsdsjyziwaqcthtvejcoyimaouvj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646743.9908998-1490-14357791965410/AnsiballZ_command.py'
Jan 05 20:59:04 compute-0 sudo[213746]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:04 compute-0 python3.9[213748]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:59:04 compute-0 sudo[213746]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:05 compute-0 sudo[213901]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xbwkistzerypzrwwgrggilhkpsbilrjp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646744.8932567-1498-264962972914181/AnsiballZ_blockinfile.py'
Jan 05 20:59:05 compute-0 sudo[213901]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:05 compute-0 python3.9[213903]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:05 compute-0 sudo[213901]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:06 compute-0 sudo[214053]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-atxvglyyofgbeccmwzxiqjmuscdncnpp ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646746.0670156-1507-119565929154374/AnsiballZ_command.py'
Jan 05 20:59:06 compute-0 sudo[214053]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:06 compute-0 python3.9[214055]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:59:06 compute-0 sudo[214053]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:07 compute-0 sudo[214206]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xfenxuatgtfmincpiypluzliahsbyldf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646746.9344842-1515-87048642062715/AnsiballZ_stat.py'
Jan 05 20:59:07 compute-0 sudo[214206]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:07 compute-0 python3.9[214208]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:59:07 compute-0 sudo[214206]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.772 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.773 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.773 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.774 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.774 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.779 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.780 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.781 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.782 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.789 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.790 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.791 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.792 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 20:59:07.793 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 20:59:08 compute-0 sudo[214361]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rjbejurblcncyjkeuwsaxttmcktxfjlu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646747.9004283-1523-203948494413383/AnsiballZ_command.py'
Jan 05 20:59:08 compute-0 sudo[214361]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:08 compute-0 python3.9[214363]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:59:08 compute-0 sudo[214361]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:09 compute-0 sudo[214527]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zmefwansplolxdszcegmlydilcxydanj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646748.877072-1531-185921765909357/AnsiballZ_file.py'
Jan 05 20:59:09 compute-0 sudo[214527]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:09 compute-0 podman[214490]: 2026-01-05 20:59:09.278878999 +0000 UTC m=+0.084985657 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 20:59:09 compute-0 python3.9[214535]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:09 compute-0 sudo[214527]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:09 compute-0 sshd-session[186318]: Connection closed by 192.168.122.30 port 54272
Jan 05 20:59:09 compute-0 sshd-session[186315]: pam_unix(sshd:session): session closed for user zuul
Jan 05 20:59:09 compute-0 systemd[1]: session-25.scope: Deactivated successfully.
Jan 05 20:59:09 compute-0 systemd[1]: session-25.scope: Consumed 2min 27.521s CPU time.
Jan 05 20:59:09 compute-0 systemd-logind[788]: Session 25 logged out. Waiting for processes to exit.
Jan 05 20:59:09 compute-0 systemd-logind[788]: Removed session 25.
Jan 05 20:59:15 compute-0 sshd-session[214561]: Accepted publickey for zuul from 192.168.122.30 port 55108 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 20:59:15 compute-0 systemd-logind[788]: New session 26 of user zuul.
Jan 05 20:59:15 compute-0 systemd[1]: Started Session 26 of User zuul.
Jan 05 20:59:15 compute-0 sshd-session[214561]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 20:59:16 compute-0 sudo[214714]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-myvqtqrxdvzgmhmdpqrmbtxypimfqpjc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646755.383252-24-103351541380568/AnsiballZ_systemd_service.py'
Jan 05 20:59:16 compute-0 sudo[214714]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:16 compute-0 python3.9[214716]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:59:16 compute-0 systemd[1]: Reloading.
Jan 05 20:59:16 compute-0 systemd-sysv-generator[214747]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:59:16 compute-0 systemd-rc-local-generator[214741]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:59:16 compute-0 sudo[214714]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:16 compute-0 podman[214752]: 2026-01-05 20:59:16.906702695 +0000 UTC m=+0.093315165 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, distribution-scope=public, io.openshift.tags=minimal rhel9, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 20:59:17 compute-0 python3.9[214922]: ansible-ansible.builtin.service_facts Invoked
Jan 05 20:59:17 compute-0 network[214939]: You are using 'network' service provided by 'network-scripts', which are now deprecated.
Jan 05 20:59:17 compute-0 network[214940]: 'network-scripts' will be removed from distribution in near future.
Jan 05 20:59:17 compute-0 network[214941]: It is advised to switch to 'NetworkManager' instead for network management.
Jan 05 20:59:18 compute-0 podman[214947]: 2026-01-05 20:59:18.952346727 +0000 UTC m=+0.127370687 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:59:22 compute-0 podman[215067]: 2026-01-05 20:59:22.527244586 +0000 UTC m=+0.067654453 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 05 20:59:22 compute-0 podman[215080]: 2026-01-05 20:59:22.609099879 +0000 UTC m=+0.089201897 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 20:59:24 compute-0 sudo[215278]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-khapexqxlciliedzhmqftweeccdtjdic ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646763.5261476-47-237402963895668/AnsiballZ_systemd_service.py'
Jan 05 20:59:24 compute-0 sudo[215278]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:24 compute-0 python3.9[215280]: ansible-ansible.builtin.systemd_service Invoked with enabled=False name=tripleo_ceilometer_agent_ipmi.service state=stopped daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 20:59:24 compute-0 sudo[215278]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:25 compute-0 sudo[215431]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gsdhbahxqbtsjdmzhnqflqfscltirmes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646764.8420398-57-100774664551608/AnsiballZ_file.py'
Jan 05 20:59:25 compute-0 sudo[215431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:25 compute-0 python3.9[215433]: ansible-ansible.builtin.file Invoked with path=/usr/lib/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:25 compute-0 sudo[215431]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:26 compute-0 sudo[215583]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-txfzhabgqjipydmbcfbpgubylsdtwnzo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646765.9572492-65-5827803827726/AnsiballZ_file.py'
Jan 05 20:59:26 compute-0 sudo[215583]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:26 compute-0 python3.9[215585]: ansible-ansible.builtin.file Invoked with path=/etc/systemd/system/tripleo_ceilometer_agent_ipmi.service state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:26 compute-0 sudo[215583]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:27 compute-0 sudo[215735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gwhufrffaecmbwrgathuizcfpconwivt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646766.816905-74-162536121190668/AnsiballZ_command.py'
Jan 05 20:59:27 compute-0 sudo[215735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:27 compute-0 python3.9[215737]: ansible-ansible.legacy.command Invoked with _raw_params=if systemctl is-active certmonger.service; then
                                               systemctl disable --now certmonger.service
                                               test -f /etc/systemd/system/certmonger.service || systemctl mask certmonger.service
                                             fi
                                              _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:59:27 compute-0 sudo[215735]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:28 compute-0 python3.9[215889]: ansible-ansible.builtin.find Invoked with file_type=any hidden=True paths=['/var/lib/certmonger/requests'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 20:59:29 compute-0 sudo[216039]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oshonrxniuivjviwqhujxxbqlqraoldz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646769.0108602-92-246468301130790/AnsiballZ_systemd_service.py'
Jan 05 20:59:29 compute-0 sudo[216039]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:29 compute-0 podman[216042]: 2026-01-05 20:59:29.705417624 +0000 UTC m=+0.053951364 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 20:59:29 compute-0 python3.9[216041]: ansible-ansible.builtin.systemd_service Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 20:59:29 compute-0 podman[202426]: time="2026-01-05T20:59:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 20:59:29 compute-0 podman[202426]: @ - - [05/Jan/2026:20:59:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 05 20:59:29 compute-0 systemd[1]: Reloading.
Jan 05 20:59:29 compute-0 podman[202426]: @ - - [05/Jan/2026:20:59:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Jan 05 20:59:29 compute-0 systemd-rc-local-generator[216098]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 20:59:29 compute-0 systemd-sysv-generator[216101]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 20:59:30 compute-0 sudo[216039]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:30 compute-0 sudo[216255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fcbbldiidegdlcqdnepumdhjnboyisdl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646770.3416827-100-268180947536037/AnsiballZ_command.py'
Jan 05 20:59:30 compute-0 sudo[216255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:30 compute-0 python3.9[216257]: ansible-ansible.legacy.command Invoked with cmd=/usr/bin/systemctl reset-failed tripleo_ceilometer_agent_ipmi.service _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True _raw_params=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 20:59:30 compute-0 sudo[216255]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:31 compute-0 openstack_network_exporter[205720]: ERROR   20:59:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 20:59:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 20:59:31 compute-0 openstack_network_exporter[205720]: ERROR   20:59:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 20:59:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 20:59:31 compute-0 sudo[216412]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tyvcjztenzfpynxldwcverqhyxjwtmek ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646771.2591424-109-79907414786474/AnsiballZ_file.py'
Jan 05 20:59:31 compute-0 sudo[216412]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:31 compute-0 python3.9[216414]: ansible-ansible.builtin.file Invoked with group=zuul mode=0750 owner=zuul path=/var/lib/openstack/telemetry-power-monitoring recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:31 compute-0 sudo[216412]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:32 compute-0 python3.9[216564]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:59:33 compute-0 python3.9[216716]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:34 compute-0 python3.9[216837]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer-host-specific.conf mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646773.1014185-125-200055496520882/.source.conf follow=False _original_basename=ceilometer-host-specific.conf.j2 checksum=e86e0e43000ce9ccfe5aefbf8e8f2e3d15d05584 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:35 compute-0 python3.9[216987]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:36 compute-0 python3.9[217108]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/firewall.yaml mode=0644 setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646774.9791873-140-111734604661265/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:37 compute-0 sudo[217258]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-osdnssucbhukbuewpwditekgvbpkysrj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646776.6278257-158-68721227430604/AnsiballZ_getent.py'
Jan 05 20:59:37 compute-0 sudo[217258]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:37 compute-0 python3.9[217260]: ansible-ansible.builtin.getent Invoked with database=passwd key=ceilometer fail_key=True service=None split=None
Jan 05 20:59:37 compute-0 sudo[217258]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:38 compute-0 python3.9[217411]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:39 compute-0 podman[217506]: 2026-01-05 20:59:39.554451679 +0000 UTC m=+0.082942863 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 20:59:39 compute-0 python3.9[217541]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646778.418585-186-94201398665405/.source.conf _original_basename=ceilometer.conf follow=False checksum=e93ef84feaa07737af66c0c1da2fd4bdcae81d37 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:40 compute-0 python3.9[217702]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/polling.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:41 compute-0 python3.9[217823]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/polling.yaml mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646779.8716996-186-76118040340555/.source.yaml _original_basename=polling.yaml follow=False checksum=5ef7021082c6431099dde63e021011029cd65119 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:41 compute-0 python3.9[217973]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/custom.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:42 compute-0 python3.9[218094]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/custom.conf mode=0640 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646781.285057-186-37024973598462/.source.conf _original_basename=custom.conf follow=False checksum=838b8b0a7d7f72e55ab67d39f32e3cb3eca2139b backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:59:42.826 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:59:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:59:42.827 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:59:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 20:59:42.828 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:59:43 compute-0 python3.9[218244]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:59:44 compute-0 python3.9[218396]: ansible-ansible.builtin.stat Invoked with path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 20:59:44 compute-0 python3.9[218548]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:45 compute-0 python3.9[218669]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml mode=420 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646784.4429276-245-69041116342362/.source.yaml _original_basename=ceilometer_prom_exporter.yaml follow=False checksum=10157c879411ee6023e506dc85a343cedc52700f backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:46 compute-0 sudo[218819]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jrcpfzrnmdkoxyyhzmyxuyxdwzhglmnc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646785.8841603-260-213000167714213/AnsiballZ_file.py'
Jan 05 20:59:46 compute-0 sudo[218819]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:46 compute-0 python3.9[218821]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.crt recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:46 compute-0 sudo[218819]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:47 compute-0 sudo[218982]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uiefzqxpmjrrzwsdkkrlxihsxgfulqqk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646786.7721615-268-68411363787392/AnsiballZ_file.py'
Jan 05 20:59:47 compute-0 sudo[218982]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:47 compute-0 podman[218945]: 2026-01-05 20:59:47.182079136 +0000 UTC m=+0.080451446 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-type=git, architecture=x86_64, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc.)
Jan 05 20:59:47 compute-0 python3.9[218984]: ansible-ansible.builtin.file Invoked with group=ceilometer mode=0644 owner=ceilometer path=/var/lib/openstack/certs/telemetry-power-monitoring/default/tls.key recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False state=None _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:47 compute-0 sudo[218982]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:48 compute-0 sudo[219145]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ogoduyxwhzdcxwsmuvmxvppdqeaxjvbq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646787.5826645-276-231112036874995/AnsiballZ_file.py'
Jan 05 20:59:48 compute-0 sudo[219145]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:48 compute-0 python3.9[219147]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/openstack/healthchecks setype=container_file_t state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:48 compute-0 sudo[219145]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:48 compute-0 sudo[219297]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vghpiuambmvevplzeyizetpzombbvyql ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/AnsiballZ_stat.py'
Jan 05 20:59:48 compute-0 sudo[219297]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:49 compute-0 python3.9[219299]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:49 compute-0 sudo[219297]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:49 compute-0 sudo[219436]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-agaysjwvbkgsodhchqlnntoyevxvsmwe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/AnsiballZ_copy.py'
Jan 05 20:59:49 compute-0 sudo[219436]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:49 compute-0 podman[219394]: 2026-01-05 20:59:49.650814551 +0000 UTC m=+0.121202132 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3)
Jan 05 20:59:49 compute-0 python3.9[219444]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/.source _original_basename=healthcheck follow=False checksum=ebb343c21fce35a02591a9351660cb7035a47d42 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:49 compute-0 sudo[219436]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:50 compute-0 sudo[219521]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qayevvgcvwsebmilqwoeolayhfpxuyrh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/AnsiballZ_stat.py'
Jan 05 20:59:50 compute-0 sudo[219521]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:50 compute-0 python3.9[219523]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/healthcheck.future follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:50 compute-0 sudo[219521]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:50 compute-0 sudo[219644]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ydyzwrtxhlfzjghprexconognzkdmrmx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/AnsiballZ_copy.py'
Jan 05 20:59:50 compute-0 sudo[219644]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:51 compute-0 python3.9[219646]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646788.4954302-284-211640768869373/.source.future _original_basename=healthcheck.future follow=False checksum=d500a98192f4ddd70b4dfdc059e2d81aed36a294 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:51 compute-0 sudo[219644]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:51 compute-0 sudo[219796]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tjoujqbskbtvakaqtksihcyvlgscgspe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646791.3208-284-280286374670652/AnsiballZ_stat.py'
Jan 05 20:59:51 compute-0 sudo[219796]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:52 compute-0 python3.9[219798]: ansible-ansible.legacy.stat Invoked with path=/var/lib/openstack/healthchecks/kepler/healthcheck follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:52 compute-0 sudo[219796]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:52 compute-0 sudo[219919]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qeuegqhwdfqmmsujhfmgvoagecnxnaec ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646791.3208-284-280286374670652/AnsiballZ_copy.py'
Jan 05 20:59:52 compute-0 sudo[219919]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:52 compute-0 python3.9[219921]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/openstack/healthchecks/kepler/ group=zuul mode=0700 owner=zuul setype=container_file_t src=/home/zuul/.ansible/tmp/ansible-tmp-1767646791.3208-284-280286374670652/.source _original_basename=healthcheck follow=False checksum=57ed53cc150174efd98819129660d5b9ea9ea61a backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:52 compute-0 sudo[219919]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:52 compute-0 podman[219923]: 2026-01-05 20:59:52.749307431 +0000 UTC m=+0.079504991 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 20:59:52 compute-0 podman[219922]: 2026-01-05 20:59:52.79666869 +0000 UTC m=+0.135590898 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 05 20:59:53 compute-0 sudo[220111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-woyxftsvrtnovwdithvvlzzkxkoiezay ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646793.0340033-326-68785972719540/AnsiballZ_file.py'
Jan 05 20:59:53 compute-0 sudo[220111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.482 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.484 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.485 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.485 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.521 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.522 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.523 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.523 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 20:59:53 compute-0 python3.9[220113]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:53 compute-0 sudo[220111]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.731 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.731 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5853MB free_disk=72.48187255859375GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.732 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.732 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.804 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.805 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.828 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.841 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.843 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 20:59:53 compute-0 nova_compute[186018]: 2026-01-05 20:59:53.843 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.111s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 20:59:54 compute-0 sudo[220263]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qzenvzieaesxmwimeldgmcrbjhsakpua ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646793.8517563-334-92162862145174/AnsiballZ_file.py'
Jan 05 20:59:54 compute-0 sudo[220263]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:54 compute-0 python3.9[220265]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 20:59:54 compute-0 sudo[220263]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:54 compute-0 nova_compute[186018]: 2026-01-05 20:59:54.819 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:54 compute-0 sudo[220415]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bbbsiqvjvpflncgpdqzowgdexxgcxgbh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646794.6075158-342-166496991240522/AnsiballZ_stat.py'
Jan 05 20:59:54 compute-0 sudo[220415]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:55 compute-0 python3.9[220417]: ansible-ansible.legacy.stat Invoked with path=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 20:59:55 compute-0 sudo[220415]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:55 compute-0 nova_compute[186018]: 2026-01-05 20:59:55.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:55 compute-0 nova_compute[186018]: 2026-01-05 20:59:55.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:55 compute-0 nova_compute[186018]: 2026-01-05 20:59:55.669 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:55 compute-0 sudo[220538]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ypcydwejnjeohhhlbdaguapxbyfezvea ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646794.6075158-342-166496991240522/AnsiballZ_copy.py'
Jan 05 20:59:55 compute-0 sudo[220538]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:55 compute-0 python3.9[220540]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/kolla/config_files/ceilometer_agent_ipmi.json mode=0600 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646794.6075158-342-166496991240522/.source.json _original_basename=.1v9qqz2r follow=False checksum=fa47598aea39469905a43b7b570ec2fd120965fc backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:56 compute-0 sudo[220538]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:56 compute-0 nova_compute[186018]: 2026-01-05 20:59:56.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:56 compute-0 nova_compute[186018]: 2026-01-05 20:59:56.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:56 compute-0 python3.9[220690]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 20:59:57 compute-0 nova_compute[186018]: 2026-01-05 20:59:57.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 20:59:59 compute-0 sudo[221111]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uzbvrtschlgugcostlfmdtegdzlwgvgr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646798.9618347-382-72960226123981/AnsiballZ_container_config_data.py'
Jan 05 20:59:59 compute-0 sudo[221111]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 20:59:59 compute-0 python3.9[221113]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_pattern=*.json debug=False
Jan 05 20:59:59 compute-0 podman[202426]: time="2026-01-05T20:59:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 20:59:59 compute-0 podman[202426]: @ - - [05/Jan/2026:20:59:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 21256 "" "Go-http-client/1.1"
Jan 05 20:59:59 compute-0 sudo[221111]: pam_unix(sudo:session): session closed for user root
Jan 05 20:59:59 compute-0 podman[202426]: @ - - [05/Jan/2026:20:59:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3002 "" "Go-http-client/1.1"
Jan 05 21:00:00 compute-0 podman[221238]: 2026-01-05 21:00:00.740525173 +0000 UTC m=+0.087018047 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:00:00 compute-0 sudo[221281]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wfhwpckozrgurkonnvperhkdzybngzli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646800.1238568-393-46195015083159/AnsiballZ_container_config_hash.py'
Jan 05 21:00:00 compute-0 sudo[221281]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:00 compute-0 python3.9[221290]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 21:00:01 compute-0 sudo[221281]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:01 compute-0 openstack_network_exporter[205720]: ERROR   21:00:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:00:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:00:01 compute-0 openstack_network_exporter[205720]: ERROR   21:00:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:00:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:00:01 compute-0 sudo[221440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rmvnrubfxpibbcglziugrtrlliqsxaqo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646801.2967818-402-269146318037140/AnsiballZ_podman_container_info.py'
Jan 05 21:00:01 compute-0 sudo[221440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:02 compute-0 python3.9[221442]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 21:00:02 compute-0 sudo[221440]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:04 compute-0 sudo[221618]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rnomfzjoufnezumfujymcymtqwgvocat ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646803.3925827-415-240842817591388/AnsiballZ_edpm_container_manage.py'
Jan 05 21:00:04 compute-0 sudo[221618]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:04 compute-0 python3[221620]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/ceilometer_agent_ipmi config_id=ceilometer_agent_ipmi config_overrides={} config_patterns=*.json containers=['ceilometer_agent_ipmi'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 21:00:04 compute-0 podman[221659]: 2026-01-05 21:00:04.655992437 +0000 UTC m=+0.085474647 container create cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 21:00:04 compute-0 podman[221659]: 2026-01-05 21:00:04.614639375 +0000 UTC m=+0.044121605 image pull a92f7bca491c0b0ce2687db04282e6791be0613adb46862c56450b0e1308679d quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified
Jan 05 21:00:04 compute-0 python3[221620]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name ceilometer_agent_ipmi --conmon-pidfile /run/ceilometer_agent_ipmi.pid --env KOLLA_CONFIG_STRATEGY=COPY_ALWAYS --env OS_ENDPOINT_TYPE=internal --env EDPM_CONFIG_HASH=ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e --healthcheck-command /openstack/healthcheck ipmi --label config_id=ceilometer_agent_ipmi --label container_name=ceilometer_agent_ipmi --label managed_by=edpm_ansible --label config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --security-opt label:type:ceilometer_polling_t --user ceilometer --volume /var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z --volume /var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z --volume /etc/hosts:/etc/hosts:ro --volume /etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro --volume /etc/localtime:/etc/localtime:ro --volume /etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro --volume /var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z --volume /var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z --volume /var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z --volume /dev/log:/dev/log --volume /var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified kolla_start
Jan 05 21:00:04 compute-0 sudo[221618]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:05 compute-0 sudo[221846]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-htxlfhcbwnthuajkqfaofostdzjdrjrl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646805.139829-423-264940954824716/AnsiballZ_stat.py'
Jan 05 21:00:05 compute-0 sudo[221846]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:05 compute-0 python3.9[221848]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 21:00:05 compute-0 sudo[221846]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:06 compute-0 sudo[222000]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yxwvtazmpumwfjqrvztdgjieisurgupr ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646806.0953944-432-181430881784794/AnsiballZ_file.py'
Jan 05 21:00:06 compute-0 sudo[222000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:06 compute-0 python3.9[222002]: ansible-file Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:06 compute-0 sudo[222000]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:07 compute-0 sudo[222076]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-adwloxguzczbnjovpgzsuxmfvemlhlzu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646806.0953944-432-181430881784794/AnsiballZ_stat.py'
Jan 05 21:00:07 compute-0 sudo[222076]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:07 compute-0 python3.9[222078]: ansible-stat Invoked with path=/etc/systemd/system/edpm_ceilometer_agent_ipmi_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 21:00:07 compute-0 sudo[222076]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:07 compute-0 sudo[222227]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mlnxcjubahkrsrntvkvptoynxhynxvqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646807.3349657-432-272620207871213/AnsiballZ_copy.py'
Jan 05 21:00:07 compute-0 sudo[222227]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:08 compute-0 python3.9[222229]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646807.3349657-432-272620207871213/source dest=/etc/systemd/system/edpm_ceilometer_agent_ipmi.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:08 compute-0 sudo[222227]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:08 compute-0 sudo[222303]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ekajiowverrtspefwiesotwulczmaqxl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646807.3349657-432-272620207871213/AnsiballZ_systemd.py'
Jan 05 21:00:08 compute-0 sudo[222303]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:09 compute-0 python3.9[222305]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 21:00:09 compute-0 systemd[1]: Reloading.
Jan 05 21:00:09 compute-0 systemd-sysv-generator[222331]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 21:00:09 compute-0 systemd-rc-local-generator[222326]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 21:00:09 compute-0 sudo[222303]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:09 compute-0 podman[222342]: 2026-01-05 21:00:09.751657887 +0000 UTC m=+0.095497699 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, io.buildah.version=1.41.4)
Jan 05 21:00:09 compute-0 sudo[222435]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gkeyoudhngfkkcbfiywjuqfcahzhcsyt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646807.3349657-432-272620207871213/AnsiballZ_systemd.py'
Jan 05 21:00:09 compute-0 sudo[222435]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:10 compute-0 python3.9[222437]: ansible-systemd Invoked with state=restarted name=edpm_ceilometer_agent_ipmi.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 21:00:10 compute-0 systemd[1]: Reloading.
Jan 05 21:00:10 compute-0 systemd-sysv-generator[222472]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 21:00:10 compute-0 systemd-rc-local-generator[222468]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 21:00:10 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 05 21:00:10 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:10 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:10 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.
Jan 05 21:00:10 compute-0 podman[222478]: 2026-01-05 21:00:10.905740509 +0000 UTC m=+0.194492119 container init cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS 
Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:00:10 compute-0 ceilometer_agent_ipmi[222494]: + sudo -E kolla_set_configs
Jan 05 21:00:10 compute-0 podman[222478]: 2026-01-05 21:00:10.943896347 +0000 UTC m=+0.232647897 container start cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3)
Jan 05 21:00:10 compute-0 podman[222478]: ceilometer_agent_ipmi
Jan 05 21:00:10 compute-0 sudo[222500]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 21:00:10 compute-0 sudo[222500]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:10 compute-0 sudo[222500]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:10 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 05 21:00:11 compute-0 sudo[222435]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Validating config file
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Copying service configuration files
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: INFO:__main__:Writing out command to execute
Jan 05 21:00:11 compute-0 sudo[222500]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:11 compute-0 podman[222501]: 2026-01-05 21:00:11.042571619 +0000 UTC m=+0.082761837 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: ++ cat /run_command
Jan 05 21:00:11 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-29748190f288a139.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 21:00:11 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-29748190f288a139.service: Failed with result 'exit-code'.
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + ARGS=
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + sudo kolla_copy_cacerts
Jan 05 21:00:11 compute-0 sudo[222532]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 21:00:11 compute-0 sudo[222532]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:11 compute-0 sudo[222532]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:11 compute-0 sudo[222532]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + [[ ! -n '' ]]
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + . kolla_extend_start
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + umask 0022
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.867 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.867 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.867 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.867 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.868 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.869 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.870 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.871 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.872 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.873 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.874 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.875 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.876 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.877 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.878 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.879 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.880 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.881 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.882 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.902 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.903 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.904 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 21:00:11 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:11.992 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpdwnx0fbk/privsep.sock']
Jan 05 21:00:12 compute-0 sudo[222678]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpdwnx0fbk/privsep.sock
Jan 05 21:00:12 compute-0 sudo[222678]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:12 compute-0 sudo[222678]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:12 compute-0 python3.9[222674]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 21:00:12 compute-0 sudo[222678]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.663 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.664 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpdwnx0fbk/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.535 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.540 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.542 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.543 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.763 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.764 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.765 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.766 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.766 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.769 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.770 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.771 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.772 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.773 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.774 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.775 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.776 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.777 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.778 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.779 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.780 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.781 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.782 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.783 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.784 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.785 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.786 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.787 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.787 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.787 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.787 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.787 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 05 21:00:12 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:12.791 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 05 21:00:12 compute-0 sudo[222836]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dnuldwblcuptjhxxnquojbyxqwaelbow ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646812.4901764-473-38997610570994/AnsiballZ_stat.py'
Jan 05 21:00:12 compute-0 sudo[222836]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:13 compute-0 python3.9[222838]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:00:13 compute-0 sudo[222836]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:13 compute-0 sudo[222961]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iejhyvdepgrqvohuxesqhbtgnburzqcx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646812.4901764-473-38997610570994/AnsiballZ_copy.py'
Jan 05 21:00:13 compute-0 sudo[222961]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:13 compute-0 python3.9[222963]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646812.4901764-473-38997610570994/.source.yaml _original_basename=.i9ozbv6a follow=False checksum=61b6a2963b5e3c7242dc9b45305e81122bd10df5 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:13 compute-0 sudo[222961]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:14 compute-0 sudo[223113]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cjprgmzbmfqrtyswrkdktplcygdefntt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646814.1862473-490-29312009153503/AnsiballZ_file.py'
Jan 05 21:00:14 compute-0 sudo[223113]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:14 compute-0 python3.9[223115]: ansible-ansible.builtin.file Invoked with group=zuul mode=0755 owner=zuul path=/var/lib/edpm-config recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:14 compute-0 sudo[223113]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:15 compute-0 sudo[223265]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kfzsyrdplggqajyxyjsnpavprnuunttv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646815.09787-498-260472064059561/AnsiballZ_file.py'
Jan 05 21:00:15 compute-0 sudo[223265]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:15 compute-0 python3.9[223267]: ansible-ansible.builtin.file Invoked with path=/var/lib/kolla/config_files recurse=True setype=container_file_t state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None attributes=None
Jan 05 21:00:15 compute-0 sudo[223265]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:16 compute-0 python3.9[223417]: ansible-ansible.builtin.file Invoked with mode=0755 path=/var/lib/edpm-config/container-startup-config/kepler state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:17 compute-0 podman[223541]: 2026-01-05 21:00:17.539844156 +0000 UTC m=+0.107813881 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, container_name=openstack_network_exporter, architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:00:19 compute-0 sudo[223859]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xzwirpgbkppswrljbtyxdyrbenxypjmo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646818.8444602-532-30919815842826/AnsiballZ_container_config_data.py'
Jan 05 21:00:19 compute-0 sudo[223859]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:19 compute-0 python3.9[223861]: ansible-container_config_data Invoked with config_overrides={} config_path=/var/lib/edpm-config/container-startup-config/kepler config_pattern=*.json debug=False
Jan 05 21:00:19 compute-0 sudo[223859]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:20 compute-0 sudo[224022]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-oxjsctrotgvpvevcyzlhwqrynlxsgxvg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646819.8750412-543-128139395732665/AnsiballZ_container_config_hash.py'
Jan 05 21:00:20 compute-0 sudo[224022]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:20 compute-0 podman[223986]: 2026-01-05 21:00:20.514719743 +0000 UTC m=+0.148641099 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller)
Jan 05 21:00:20 compute-0 python3.9[224034]: ansible-container_config_hash Invoked with check_mode=False config_vol_prefix=/var/lib/openstack
Jan 05 21:00:20 compute-0 sudo[224022]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:21 compute-0 sudo[224191]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cimbaxnomnarloienwszwgokcrpribjh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646821.0480084-552-238550524149259/AnsiballZ_podman_container_info.py'
Jan 05 21:00:21 compute-0 sudo[224191]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:21 compute-0 python3.9[224193]: ansible-containers.podman.podman_container_info Invoked with executable=podman name=None
Jan 05 21:00:21 compute-0 sudo[224191]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:23 compute-0 podman[224343]: 2026-01-05 21:00:23.290711607 +0000 UTC m=+0.088709692 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 05 21:00:23 compute-0 sudo[224403]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dvjzbhanuwulvoqkubdtnsucwsveychn ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646822.8490705-565-26277486972372/AnsiballZ_edpm_container_manage.py'
Jan 05 21:00:23 compute-0 sudo[224403]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:23 compute-0 podman[224344]: 2026-01-05 21:00:23.313173005 +0000 UTC m=+0.108269884 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:00:23 compute-0 python3[224414]: ansible-edpm_container_manage Invoked with concurrency=1 config_dir=/var/lib/edpm-config/container-startup-config/kepler config_id=kepler config_overrides={} config_patterns=*.json containers=['kepler'] log_base_path=/var/log/containers/stdouts debug=False
Jan 05 21:00:23 compute-0 podman[224451]: 2026-01-05 21:00:23.845648325 +0000 UTC m=+0.069070198 container create ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, release-0.7.12=, name=ubi9, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., config_id=kepler, architecture=x86_64)
Jan 05 21:00:23 compute-0 podman[224451]: 2026-01-05 21:00:23.810604388 +0000 UTC m=+0.034026261 image pull ed61e3ea3188391c18595d8ceada2a5a01f0ece915c62fde355798735b5208d7 quay.io/sustainable_computing_io/kepler:release-0.7.12
Jan 05 21:00:23 compute-0 python3[224414]: ansible-edpm_container_manage PODMAN-CONTAINER-DEBUG: podman create --name kepler --conmon-pidfile /run/kepler.pid --env ENABLE_GPU=true --env ENABLE_PROCESS_METRICS=true --env EXPOSE_CONTAINER_METRICS=true --env EXPOSE_ESTIMATED_IDLE_POWER_METRICS=false --env EXPOSE_VM_METRICS=true --env LIBVIRT_METADATA_URI=http://openstack.org/xmlns/libvirt/nova/1.1 --healthcheck-command /openstack/healthcheck kepler --label config_id=kepler --label container_name=kepler --label managed_by=edpm_ansible --label config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']} --log-driver journald --log-level info --network host --privileged=True --publish 8888:8888 --volume /lib/modules:/lib/modules:ro --volume /run/libvirt:/run/libvirt:shared,ro --volume /sys:/sys --volume /proc:/proc --volume /var/lib/openstack/healthchecks/kepler:/openstack:ro,z quay.io/sustainable_computing_io/kepler:release-0.7.12 -v=2
Jan 05 21:00:24 compute-0 sudo[224403]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:24 compute-0 sudo[224641]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ztrsblbvdmavfysrxxrcmojmpahzjtet ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646824.3324883-573-278817502915039/AnsiballZ_stat.py'
Jan 05 21:00:24 compute-0 sudo[224641]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:24 compute-0 python3.9[224643]: ansible-ansible.builtin.stat Invoked with path=/etc/sysconfig/podman_drop_in follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 21:00:24 compute-0 sudo[224641]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:25 compute-0 sudo[224795]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-timppqscpudlikslpkuaknysvcdeqgmf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646825.307527-582-237377131113000/AnsiballZ_file.py'
Jan 05 21:00:25 compute-0 sudo[224795]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:25 compute-0 python3.9[224797]: ansible-file Invoked with path=/etc/systemd/system/edpm_kepler.requires state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:25 compute-0 sudo[224795]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:26 compute-0 sudo[224871]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fshrfmgisoxehuquhzkamgzecsekszsq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646825.307527-582-237377131113000/AnsiballZ_stat.py'
Jan 05 21:00:26 compute-0 sudo[224871]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:26 compute-0 python3.9[224873]: ansible-stat Invoked with path=/etc/systemd/system/edpm_kepler_healthcheck.timer follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 21:00:26 compute-0 sudo[224871]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:27 compute-0 sudo[225023]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pepjhvywxpvcikwmazzgymlyjgnphrti ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646826.581901-582-123817012571328/AnsiballZ_copy.py'
Jan 05 21:00:27 compute-0 sudo[225023]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:27 compute-0 python3.9[225025]: ansible-copy Invoked with src=/home/zuul/.ansible/tmp/ansible-tmp-1767646826.581901-582-123817012571328/source dest=/etc/systemd/system/edpm_kepler.service mode=0644 owner=root group=root backup=False force=True remote_src=False follow=False unsafe_writes=False _original_basename=None content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None checksum=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:27 compute-0 sudo[225023]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:27 compute-0 sudo[225099]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bpwdvjsaizmstugsawexkqkpsedxaxxv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646826.581901-582-123817012571328/AnsiballZ_systemd.py'
Jan 05 21:00:27 compute-0 sudo[225099]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:28 compute-0 python3.9[225101]: ansible-systemd Invoked with daemon_reload=True daemon_reexec=False scope=system no_block=False name=None state=None enabled=None force=None masked=None
Jan 05 21:00:28 compute-0 systemd[1]: Reloading.
Jan 05 21:00:28 compute-0 systemd-rc-local-generator[225121]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 21:00:28 compute-0 systemd-sysv-generator[225127]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 21:00:28 compute-0 sudo[225099]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:28 compute-0 sudo[225210]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hstdicamhvfadqynbbhrfintfctgctgq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646826.581901-582-123817012571328/AnsiballZ_systemd.py'
Jan 05 21:00:28 compute-0 sudo[225210]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:29 compute-0 python3.9[225212]: ansible-systemd Invoked with state=restarted name=edpm_kepler.service enabled=True daemon_reload=False daemon_reexec=False scope=system no_block=False force=None masked=None
Jan 05 21:00:29 compute-0 systemd[1]: Reloading.
Jan 05 21:00:29 compute-0 systemd-rc-local-generator[225239]: /etc/rc.d/rc.local is not marked executable, skipping.
Jan 05 21:00:29 compute-0 systemd-sysv-generator[225244]: SysV service '/etc/rc.d/init.d/network' lacks a native systemd unit file. Automatically generating a unit file for compatibility. Please update package to include a native systemd unit file, in order to make it more safe and robust.
Jan 05 21:00:29 compute-0 systemd[1]: Starting kepler container...
Jan 05 21:00:29 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:00:29 compute-0 podman[202426]: time="2026-01-05T21:00:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:00:29 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.
Jan 05 21:00:29 compute-0 podman[225253]: 2026-01-05 21:00:29.795388378 +0000 UTC m=+0.192367824 container init ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_id=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, managed_by=edpm_ansible, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, vcs-type=git, architecture=x86_64, name=ubi9, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30)
Jan 05 21:00:29 compute-0 kepler[225268]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 21:00:29 compute-0 podman[225253]: 2026-01-05 21:00:29.830360213 +0000 UTC m=+0.227339629 container start ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, version=9.4, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.838075       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.838263       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.838282       1 config.go:295] kernel version: 5.14
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.839068       1 power.go:78] Unable to obtain power, use estimate method
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.839101       1 redfish.go:169] failed to get redfish credential file path
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.839613       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 05 21:00:29 compute-0 podman[225253]: kepler
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.839623       1 power.go:79] using none to obtain power
Jan 05 21:00:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:00:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27277 "" "Go-http-client/1.1"
Jan 05 21:00:29 compute-0 kepler[225268]: E0105 21:00:29.839643       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 05 21:00:29 compute-0 kepler[225268]: E0105 21:00:29.839676       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 05 21:00:29 compute-0 kepler[225268]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 21:00:29 compute-0 kepler[225268]: I0105 21:00:29.842298       1 exporter.go:84] Number of CPUs: 8
Jan 05 21:00:29 compute-0 systemd[1]: Started kepler container.
Jan 05 21:00:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:00:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3831 "" "Go-http-client/1.1"
Jan 05 21:00:29 compute-0 sudo[225210]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:29 compute-0 podman[225278]: 2026-01-05 21:00:29.938077221 +0000 UTC m=+0.088134087 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., io.buildah.version=1.29.0, config_id=kepler, maintainer=Red Hat, Inc., release=1214.1726694543, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 05 21:00:29 compute-0 systemd[1]: ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-4e1356e9ea8522d9.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 21:00:29 compute-0 systemd[1]: ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-4e1356e9ea8522d9.service: Failed with result 'exit-code'.
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.392803       1 watcher.go:83] Using in cluster k8s config
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.392858       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 05 21:00:30 compute-0 kepler[225268]: E0105 21:00:30.392920       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.400194       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.400313       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.406977       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.407035       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.419325       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.419389       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.419422       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431515       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431592       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431602       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431611       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431621       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431677       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431807       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431846       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431883       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.431917       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.432077       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 05 21:00:30 compute-0 kepler[225268]: I0105 21:00:30.432697       1 exporter.go:208] Started Kepler in 594.875533ms
Jan 05 21:00:30 compute-0 python3.9[225460]: ansible-ansible.builtin.slurp Invoked with src=/var/lib/edpm-config/deployed_services.yaml
Jan 05 21:00:31 compute-0 openstack_network_exporter[205720]: ERROR   21:00:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:00:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:00:31 compute-0 openstack_network_exporter[205720]: ERROR   21:00:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:00:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:00:31 compute-0 podman[225538]: 2026-01-05 21:00:31.783870089 +0000 UTC m=+0.122181917 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:00:32 compute-0 sudo[225634]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xeimsvperkaeqoiznefhjtjhzfpecjco ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646831.4240484-623-246368959915561/AnsiballZ_stat.py'
Jan 05 21:00:32 compute-0 sudo[225634]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:32 compute-0 python3.9[225636]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/deployed_services.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:00:32 compute-0 sudo[225634]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:33 compute-0 sudo[225759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-lsaasrxkokocrvnczcvwvqrhvayvzrue ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646831.4240484-623-246368959915561/AnsiballZ_copy.py'
Jan 05 21:00:33 compute-0 sudo[225759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:33 compute-0 python3.9[225761]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/deployed_services.yaml mode=0644 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646831.4240484-623-246368959915561/.source.yaml _original_basename=.a9bcabf2 follow=False checksum=a077f2f09ef3365b6641834edd68d726e91903a7 backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:33 compute-0 sudo[225759]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:34 compute-0 sudo[225911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yogcluhtrkbtmorpjdrldkdghdddetry ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646833.6714566-638-213848521200728/AnsiballZ_systemd.py'
Jan 05 21:00:34 compute-0 sudo[225911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:34 compute-0 python3.9[225913]: ansible-ansible.builtin.systemd Invoked with name=edpm_ceilometer_agent_ipmi.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 21:00:34 compute-0 systemd[1]: Stopping ceilometer_agent_ipmi container...
Jan 05 21:00:34 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:34.765 2 INFO cotyledon._service_manager [-] Caught SIGTERM signal, graceful exiting of master process
Jan 05 21:00:34 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:34.868 2 DEBUG cotyledon._service_manager [-] Killing services with signal SIGTERM _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:304
Jan 05 21:00:34 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:34.869 2 DEBUG cotyledon._service_manager [-] Waiting services to terminate _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:308
Jan 05 21:00:34 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:34.870 12 INFO cotyledon._service [-] Caught SIGTERM signal, graceful exiting of service AgentManager(0) [12]
Jan 05 21:00:34 compute-0 ceilometer_agent_ipmi[222494]: 2026-01-05 21:00:34.890 2 DEBUG cotyledon._service_manager [-] Shutdown finish _shutdown /usr/lib/python3.9/site-packages/cotyledon/_service_manager.py:320
Jan 05 21:00:35 compute-0 systemd[1]: libpod-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope: Deactivated successfully.
Jan 05 21:00:35 compute-0 systemd[1]: libpod-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope: Consumed 2.290s CPU time.
Jan 05 21:00:35 compute-0 podman[225917]: 2026-01-05 21:00:35.184402842 +0000 UTC m=+0.508653728 container died cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:00:35 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-29748190f288a139.timer: Deactivated successfully.
Jan 05 21:00:35 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.
Jan 05 21:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-userdata-shm.mount: Deactivated successfully.
Jan 05 21:00:35 compute-0 systemd[1]: var-lib-containers-storage-overlay-99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4-merged.mount: Deactivated successfully.
Jan 05 21:00:35 compute-0 podman[225917]: 2026-01-05 21:00:35.287208521 +0000 UTC m=+0.611459417 container cleanup cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:00:35 compute-0 podman[225917]: ceilometer_agent_ipmi
Jan 05 21:00:35 compute-0 podman[225946]: ceilometer_agent_ipmi
Jan 05 21:00:35 compute-0 systemd[1]: edpm_ceilometer_agent_ipmi.service: Deactivated successfully.
Jan 05 21:00:35 compute-0 systemd[1]: Stopped ceilometer_agent_ipmi container.
Jan 05 21:00:35 compute-0 systemd[1]: Starting ceilometer_agent_ipmi container...
Jan 05 21:00:35 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/etc/ceilometer/ceilometer_prom_exporter.yaml supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/etc/ceilometer/tls supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/var/lib/kolla/config_files/src supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:35 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/99e0e5939fa848169688375dbca3c421c27d765006fe5cb19c8059cdeaee00b4/merged/var/lib/kolla/config_files/config.json supports timestamps until 2038 (0x7fffffff)
Jan 05 21:00:35 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.
Jan 05 21:00:35 compute-0 podman[225957]: 2026-01-05 21:00:35.663339551 +0000 UTC m=+0.212143491 container init cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + sudo -E kolla_set_configs
Jan 05 21:00:35 compute-0 sudo[225978]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_set_configs
Jan 05 21:00:35 compute-0 sudo[225978]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:35 compute-0 sudo[225978]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:35 compute-0 podman[225957]: 2026-01-05 21:00:35.710747271 +0000 UTC m=+0.259551171 container start cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:00:35 compute-0 podman[225957]: ceilometer_agent_ipmi
Jan 05 21:00:35 compute-0 systemd[1]: Started ceilometer_agent_ipmi container.
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Loading config file at /var/lib/kolla/config_files/config.json
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Validating config file
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Kolla config strategy set to: COPY_ALWAYS
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Copying service configuration files
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer.conf to /etc/ceilometer/ceilometer.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Deleting /etc/ceilometer/polling.yaml
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Copying /var/lib/kolla/config_files/src/polling.yaml to /etc/ceilometer/polling.yaml
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Setting permission for /etc/ceilometer/polling.yaml
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Copying /var/lib/kolla/config_files/src/custom.conf to /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/01-ceilometer-custom.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Deleting /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Copying /var/lib/kolla/config_files/src/ceilometer-host-specific.conf to /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Setting permission for /etc/ceilometer/ceilometer.conf.d/02-ceilometer-host-specific.conf
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: INFO:__main__:Writing out command to execute
Jan 05 21:00:35 compute-0 sudo[225978]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: ++ cat /run_command
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + CMD='/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + ARGS=
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + sudo kolla_copy_cacerts
Jan 05 21:00:35 compute-0 sudo[225911]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:35 compute-0 sudo[225993]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/local/bin/kolla_copy_cacerts
Jan 05 21:00:35 compute-0 sudo[225993]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:35 compute-0 sudo[225993]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:35 compute-0 sudo[225993]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + [[ ! -n '' ]]
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + . kolla_extend_start
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + echo 'Running command: '\''/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'\'''
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: Running command: '/usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout'
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + umask 0022
Jan 05 21:00:35 compute-0 ceilometer_agent_ipmi[225972]: + exec /usr/bin/ceilometer-polling --polling-namespaces ipmi --logfile /dev/stdout
Jan 05 21:00:35 compute-0 podman[225979]: 2026-01-05 21:00:35.82077086 +0000 UTC m=+0.090675634 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=1, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:00:35 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-1dc45169817025eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 21:00:35 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-1dc45169817025eb.service: Failed with result 'exit-code'.
Jan 05 21:00:36 compute-0 sudo[226154]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xvmdkdrbzccbidwyanaixxfxdyoxhdiz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646836.0876331-646-176739422962834/AnsiballZ_systemd.py'
Jan 05 21:00:36 compute-0 sudo[226154]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.669 2 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_manager_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:40
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.670 2 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.671 2 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.672 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.673 2 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.674 2 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.675 2 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.676 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.677 2 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.678 2 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.679 2 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.680 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.681 2 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.682 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.683 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.684 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.685 2 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.711 12 INFO ceilometer.polling.manager [-] Looking for dynamic pollsters configurations at [['/etc/ceilometer/pollsters.d']].
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.714 12 INFO ceilometer.polling.manager [-] No dynamic pollsters found in folder [/etc/ceilometer/pollsters.d].
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.715 12 INFO ceilometer.polling.manager [-] No dynamic pollsters file found in dirs [['/etc/ceilometer/pollsters.d']].
Jan 05 21:00:36 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:36.741 12 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'ceilometer-rootwrap', '/etc/ceilometer/rootwrap.conf', 'privsep-helper', '--privsep_context', 'ceilometer.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmpg0nq_ye9/privsep.sock']
Jan 05 21:00:36 compute-0 sudo[226161]: ceilometer : PWD=/ ; USER=root ; COMMAND=/usr/bin/ceilometer-rootwrap /etc/ceilometer/rootwrap.conf privsep-helper --privsep_context ceilometer.privsep.sys_admin_pctxt --privsep_sock_path /tmp/tmpg0nq_ye9/privsep.sock
Jan 05 21:00:36 compute-0 sudo[226161]: pam_systemd(sudo:session): Failed to connect to system bus: No such file or directory
Jan 05 21:00:36 compute-0 sudo[226161]: pam_unix(sudo:session): session opened for user root(uid=0) by (uid=42405)
Jan 05 21:00:36 compute-0 python3.9[226156]: ansible-ansible.builtin.systemd Invoked with name=edpm_kepler.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 21:00:37 compute-0 systemd[1]: Stopping kepler container...
Jan 05 21:00:37 compute-0 kepler[225268]: I0105 21:00:37.160542       1 exporter.go:218] Received shutdown signal
Jan 05 21:00:37 compute-0 kepler[225268]: I0105 21:00:37.162482       1 exporter.go:226] Exiting...
Jan 05 21:00:37 compute-0 systemd[1]: libpod-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope: Deactivated successfully.
Jan 05 21:00:37 compute-0 systemd[1]: libpod-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope: Consumed 1.115s CPU time.
Jan 05 21:00:37 compute-0 podman[226168]: 2026-01-05 21:00:37.375047682 +0000 UTC m=+0.306551421 container died ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, release=1214.1726694543, com.redhat.component=ubi9-container, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.openshift.tags=base rhel9, version=9.4, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, release-0.7.12=, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Jan 05 21:00:37 compute-0 systemd[1]: ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-4e1356e9ea8522d9.timer: Deactivated successfully.
Jan 05 21:00:37 compute-0 systemd[1]: Stopped /usr/bin/podman healthcheck run ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.
Jan 05 21:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-userdata-shm.mount: Deactivated successfully.
Jan 05 21:00:37 compute-0 systemd[1]: var-lib-containers-storage-overlay-197aaa5465712de631b0ba281f598ca086173d099f15a33d58b0f89a9525a6c0-merged.mount: Deactivated successfully.
Jan 05 21:00:37 compute-0 podman[226168]: 2026-01-05 21:00:37.438274646 +0000 UTC m=+0.369778395 container cleanup ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, container_name=kepler, architecture=x86_64, build-date=2024-09-18T21:23:30, release-0.7.12=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, name=ubi9, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:00:37 compute-0 podman[226168]: kepler
Jan 05 21:00:37 compute-0 sudo[226161]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.455 12 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.456 12 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpg0nq_ye9/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.318 19 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.326 19 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.330 19 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.330 19 INFO oslo.privsep.daemon [-] privsep daemon running as pid 19
Jan 05 21:00:37 compute-0 podman[226196]: kepler
Jan 05 21:00:37 compute-0 systemd[1]: edpm_kepler.service: Deactivated successfully.
Jan 05 21:00:37 compute-0 systemd[1]: Stopped kepler container.
Jan 05 21:00:37 compute-0 systemd[1]: Starting kepler container...
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.582 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.current: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.583 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.fan: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.586 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.airflow: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.587 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cpu_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.587 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.cups: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.587 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.io_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.588 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.mem_util: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.588 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.outlet_temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.588 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.power: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.589 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.node.temperature: object.__new__() takes exactly one argument (the type to instantiate) _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.589 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.temperature: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.590 12 DEBUG ceilometer.polling.manager [-] Skip loading extension for hardware.ipmi.voltage: IPMITool not supported on host _catch_extension_load_error /usr/lib/python3.9/site-packages/ceilometer/polling/manager.py:421
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.590 12 WARNING ceilometer.polling.manager [-] No valid pollsters can be loaded from ['ipmi'] namespaces
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.597 12 DEBUG cotyledon.oslo_config_glue [-] Full set of CONF: _load_service_options /usr/lib/python3.9/site-packages/cotyledon/oslo_config_glue.py:48
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.597 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2589
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.597 12 DEBUG cotyledon.oslo_config_glue [-] Configuration options gathered from: log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2590
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.598 12 DEBUG cotyledon.oslo_config_glue [-] command line args: ['--polling-namespaces', 'ipmi', '--logfile', '/dev/stdout'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2591
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.598 12 DEBUG cotyledon.oslo_config_glue [-] config files: ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2592
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.598 12 DEBUG cotyledon.oslo_config_glue [-] ================================================================================ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2594
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.598 12 DEBUG cotyledon.oslo_config_glue [-] batch_size                     = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.599 12 DEBUG cotyledon.oslo_config_glue [-] cfg_file                       = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.599 12 DEBUG cotyledon.oslo_config_glue [-] config_dir                     = ['/etc/ceilometer/ceilometer.conf.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.599 12 DEBUG cotyledon.oslo_config_glue [-] config_file                    = ['/etc/ceilometer/ceilometer.conf'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.600 12 DEBUG cotyledon.oslo_config_glue [-] config_source                  = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.600 12 DEBUG cotyledon.oslo_config_glue [-] control_exchange               = ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.600 12 DEBUG cotyledon.oslo_config_glue [-] debug                          = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.601 12 DEBUG cotyledon.oslo_config_glue [-] default_log_levels             = ['amqp=WARN', 'amqplib=WARN', 'boto=WARN', 'qpid=WARN', 'sqlalchemy=WARN', 'suds=INFO', 'oslo.messaging=INFO', 'oslo_messaging=INFO', 'iso8601=WARN', 'requests.packages.urllib3.connectionpool=WARN', 'urllib3.connectionpool=WARN', 'websocket=WARN', 'requests.packages.urllib3.util.retry=WARN', 'urllib3.util.retry=WARN', 'keystonemiddleware=WARN', 'routes.middleware=WARN', 'stevedore=WARN', 'taskflow=WARN', 'keystoneauth=WARN', 'oslo.cache=INFO', 'oslo_policy=INFO', 'dogpile.core.dogpile=INFO', 'futurist=INFO', 'neutronclient=INFO', 'keystoneclient=INFO'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.601 12 DEBUG cotyledon.oslo_config_glue [-] event_pipeline_cfg_file        = event_pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.602 12 DEBUG cotyledon.oslo_config_glue [-] graceful_shutdown_timeout      = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.602 12 DEBUG cotyledon.oslo_config_glue [-] host                           = compute-0.ctlplane.example.com log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.602 12 DEBUG cotyledon.oslo_config_glue [-] http_timeout                   = 600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.603 12 DEBUG cotyledon.oslo_config_glue [-] hypervisor_inspector           = libvirt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.603 12 DEBUG cotyledon.oslo_config_glue [-] instance_format                = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.603 12 DEBUG cotyledon.oslo_config_glue [-] instance_uuid_format           = [instance: %(uuid)s]  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.604 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_type                   = kvm log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.604 12 DEBUG cotyledon.oslo_config_glue [-] libvirt_uri                    =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.604 12 DEBUG cotyledon.oslo_config_glue [-] log_config_append              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.604 12 DEBUG cotyledon.oslo_config_glue [-] log_date_format                = %Y-%m-%d %H:%M:%S log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.605 12 DEBUG cotyledon.oslo_config_glue [-] log_dir                        = /var/log/ceilometer log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.605 12 DEBUG cotyledon.oslo_config_glue [-] log_file                       = /dev/stdout log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.605 12 DEBUG cotyledon.oslo_config_glue [-] log_options                    = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.605 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval            = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.605 12 DEBUG cotyledon.oslo_config_glue [-] log_rotate_interval_type       = days log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.606 12 DEBUG cotyledon.oslo_config_glue [-] log_rotation_type              = none log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.606 12 DEBUG cotyledon.oslo_config_glue [-] logging_context_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [%(global_request_id)s %(request_id)s %(user_identity)s] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.606 12 DEBUG cotyledon.oslo_config_glue [-] logging_debug_format_suffix    = %(funcName)s %(pathname)s:%(lineno)d log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.606 12 DEBUG cotyledon.oslo_config_glue [-] logging_default_format_string  = %(asctime)s.%(msecs)03d %(process)d %(levelname)s %(name)s [-] %(instance)s%(message)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.606 12 DEBUG cotyledon.oslo_config_glue [-] logging_exception_prefix       = %(asctime)s.%(msecs)03d %(process)d ERROR %(name)s %(instance)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.607 12 DEBUG cotyledon.oslo_config_glue [-] logging_user_identity_format   = %(user)s %(project)s %(domain)s %(system_scope)s %(user_domain)s %(project_domain)s log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.607 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_count              = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.607 12 DEBUG cotyledon.oslo_config_glue [-] max_logfile_size_mb            = 200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.607 12 DEBUG cotyledon.oslo_config_glue [-] max_parallel_requests          = 64 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.608 12 DEBUG cotyledon.oslo_config_glue [-] partitioning_group_prefix      = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.608 12 DEBUG cotyledon.oslo_config_glue [-] pipeline_cfg_file              = pipeline.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.608 12 DEBUG cotyledon.oslo_config_glue [-] polling_namespaces             = ['ipmi'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.608 12 DEBUG cotyledon.oslo_config_glue [-] pollsters_definitions_dirs     = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.609 12 DEBUG cotyledon.oslo_config_glue [-] publish_errors                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.609 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_burst               = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.609 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_except_level        = CRITICAL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.609 12 DEBUG cotyledon.oslo_config_glue [-] rate_limit_interval            = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.609 12 DEBUG cotyledon.oslo_config_glue [-] reseller_prefix                = AUTH_ log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.610 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_keys         = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.610 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_length       = 256 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.610 12 DEBUG cotyledon.oslo_config_glue [-] reserved_metadata_namespace    = ['metering.'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.610 12 DEBUG cotyledon.oslo_config_glue [-] rootwrap_config                = /etc/ceilometer/rootwrap.conf log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.611 12 DEBUG cotyledon.oslo_config_glue [-] sample_source                  = openstack log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.611 12 DEBUG cotyledon.oslo_config_glue [-] syslog_log_facility            = LOG_USER log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.611 12 DEBUG cotyledon.oslo_config_glue [-] tenant_name_discovery          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.612 12 DEBUG cotyledon.oslo_config_glue [-] transport_url                  = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.612 12 DEBUG cotyledon.oslo_config_glue [-] use_eventlog                   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.612 12 DEBUG cotyledon.oslo_config_glue [-] use_journal                    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.613 12 DEBUG cotyledon.oslo_config_glue [-] use_json                       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.613 12 DEBUG cotyledon.oslo_config_glue [-] use_stderr                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.613 12 DEBUG cotyledon.oslo_config_glue [-] use_syslog                     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.614 12 DEBUG cotyledon.oslo_config_glue [-] watch_log_file                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2602
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.614 12 DEBUG cotyledon.oslo_config_glue [-] compute.instance_discovery_method = libvirt_metadata log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.614 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_cache_expiry  = 3600 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.615 12 DEBUG cotyledon.oslo_config_glue [-] compute.resource_update_interval = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.615 12 DEBUG cotyledon.oslo_config_glue [-] coordination.backend_url       = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.615 12 DEBUG cotyledon.oslo_config_glue [-] event.definitions_cfg_file     = event_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.616 12 DEBUG cotyledon.oslo_config_glue [-] event.drop_unmatched_notifications = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.616 12 DEBUG cotyledon.oslo_config_glue [-] event.store_raw                = [] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.616 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.node_manager_init_retry   = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.616 12 DEBUG cotyledon.oslo_config_glue [-] ipmi.polling_retry             = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.616 12 DEBUG cotyledon.oslo_config_glue [-] meter.meter_definitions_dirs   = ['/etc/ceilometer/meters.d', '/usr/lib/python3.9/site-packages/ceilometer/data/meters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.617 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_on_failure     = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.617 12 DEBUG cotyledon.oslo_config_glue [-] monasca.archive_path           = mon_pub_failures.txt log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.617 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.617 12 DEBUG cotyledon.oslo_config_glue [-] monasca.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_count            = 1000 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_max_retries      = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_mode             = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_polling_interval = 5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.618 12 DEBUG cotyledon.oslo_config_glue [-] monasca.batch_timeout          = 15 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_max_retries     = 3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.619 12 DEBUG cotyledon.oslo_config_glue [-] monasca.client_retry_interval  = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.clientapi_version      = 2_0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cloud_name             = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.cluster                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.620 12 DEBUG cotyledon.oslo_config_glue [-] monasca.control_plane          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.enable_api_pagination  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.621 12 DEBUG cotyledon.oslo_config_glue [-] monasca.monasca_mappings       = /etc/ceilometer/monasca_field_definitions.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.retry_on_failure       = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.622 12 DEBUG cotyledon.oslo_config_glue [-] monasca.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.623 12 DEBUG cotyledon.oslo_config_glue [-] notification.ack_on_event_error = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.623 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_size        = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.623 12 DEBUG cotyledon.oslo_config_glue [-] notification.batch_timeout     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.623 12 DEBUG cotyledon.oslo_config_glue [-] notification.messaging_urls    = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.notification_control_exchanges = ['nova', 'glance', 'neutron', 'cinder', 'heat', 'keystone', 'sahara', 'trove', 'zaqar', 'swift', 'ceilometer', 'magnum', 'dns', 'ironic', 'aodh'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.pipelines         = ['meter', 'event'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.624 12 DEBUG cotyledon.oslo_config_glue [-] notification.workers           = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.624 12 DEBUG cotyledon.oslo_config_glue [-] polling.batch_size             = 50 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.cfg_file               = polling.yaml log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.partitioning_group_prefix = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.pollsters_definitions_dirs = ['/etc/ceilometer/pollsters.d'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.625 12 DEBUG cotyledon.oslo_config_glue [-] polling.tenant_name_discovery  = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.625 12 DEBUG cotyledon.oslo_config_glue [-] publisher.telemetry_secret     = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.626 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.event_topic = event log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.626 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.metering_topic = metering log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.626 12 DEBUG cotyledon.oslo_config_glue [-] publisher_notifier.telemetry_driver = messagingv2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.access_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_admin_credentials.secret_key = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.627 12 DEBUG cotyledon.oslo_config_glue [-] rgw_client.implicit_tenants    = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.627 12 DEBUG cotyledon.oslo_config_glue [-] service_types.cinder           = volumev3 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.627 12 DEBUG cotyledon.oslo_config_glue [-] service_types.glance           = image log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.neutron          = network log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.nova             = compute log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.radosgw          = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.628 12 DEBUG cotyledon.oslo_config_glue [-] service_types.swift            = object-store log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.api_retry_count         = 10 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.ca_file                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.629 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_ip                 = 127.0.0.1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_password           = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_port               = 443 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.630 12 DEBUG cotyledon.oslo_config_glue [-] vmware.host_username           =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.631 12 DEBUG cotyledon.oslo_config_glue [-] vmware.insecure                = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.631 12 DEBUG cotyledon.oslo_config_glue [-] vmware.task_poll_interval      = 0.5 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.631 12 DEBUG cotyledon.oslo_config_glue [-] vmware.wsdl_location           = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_section = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.631 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.auth_type  = password log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.cafile     = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.certfile   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.collect_timing = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.insecure   = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.interface  = internalURL log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.632 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.keyfile    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.region_name = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.split_loggers = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] service_credentials.timeout    = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_section           = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.auth_type              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.cafile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.certfile               = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.collect_timing         = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.633 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.insecure               = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.interface              = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.keyfile                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.region_name            = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.split_loggers          = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] gnocchi.timeout                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_section             = service_credentials log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.634 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.auth_type                = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.cafile                   = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.certfile                 = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.collect_timing           = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.insecure                 = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.interface                = internal log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.635 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.keyfile                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.region_name              = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.split_loggers            = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] zaqar.timeout                  = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.driver = ['noop'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.retry = -1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.topics = ['notifications'] log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_notifications.transport_url = **** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.636 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_auto_delete = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.amqp_durable_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_min_size = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.conn_pool_ttl = 1200 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.direct_mandatory_flag = True log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.enable_cancel_on_failover = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_in_pthread = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_rate = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.637 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.heartbeat_timeout_threshold = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_compression = None log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_failover_strategy = round-robin log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_missing_consumer_retry_timeout = 60 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.kombu_reconnect_delay = 1.0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_ha_queues = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_interval_max = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.638 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_login_method = AMQPLAIN log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_qos_prefetch_count = 100 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_delivery_limit = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_bytes = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_max_memory_length = 0 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_quorum_queue = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_backoff = 2 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_retry_interval = 1 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.639 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rabbit_transient_queues_ttl = 1800 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.rpc_conn_pool_size = 30 log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl      = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_ca_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_cert_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_enforce_fips_mode = False log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_key_file =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.640 12 DEBUG cotyledon.oslo_config_glue [-] oslo_messaging_rabbit.ssl_version =  log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2609
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.641 12 DEBUG cotyledon.oslo_config_glue [-] ******************************************************************************** log_opt_values /usr/lib/python3.9/site-packages/oslo_config/cfg.py:2613
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.641 12 DEBUG cotyledon._service [-] Run service AgentManager(0) [12] wait_forever /usr/lib/python3.9/site-packages/cotyledon/_service.py:241
Jan 05 21:00:37 compute-0 ceilometer_agent_ipmi[225972]: 2026-01-05 21:00:37.646 12 DEBUG ceilometer.agent [-] Config file: {'sources': [{'name': 'pollsters', 'interval': 120, 'meters': ['hardware.*']}]} load_config /usr/lib/python3.9/site-packages/ceilometer/agent.py:64
Jan 05 21:00:37 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:00:37 compute-0 systemd[1]: Started /usr/bin/podman healthcheck run ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.
Jan 05 21:00:37 compute-0 podman[226211]: 2026-01-05 21:00:37.752402894 +0000 UTC m=+0.174446435 container init ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_id=kepler, name=ubi9, architecture=x86_64, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=)
Jan 05 21:00:37 compute-0 kepler[226227]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 21:00:37 compute-0 podman[226211]: 2026-01-05 21:00:37.803485791 +0000 UTC m=+0.225529252 container start ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, managed_by=edpm_ansible, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., config_id=kepler, container_name=kepler, architecture=x86_64, com.redhat.component=ubi9-container)
Jan 05 21:00:37 compute-0 podman[226211]: kepler
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.811178       1 exporter.go:103] Kepler running on version: v0.7.12-dirty
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.811708       1 config.go:293] using gCgroup ID in the BPF program: true
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.811900       1 config.go:295] kernel version: 5.14
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.813006       1 power.go:78] Unable to obtain power, use estimate method
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.813052       1 redfish.go:169] failed to get redfish credential file path
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.814005       1 acpi.go:71] Could not find any ACPI power meter path. Is it a VM?
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.814036       1 power.go:79] using none to obtain power
Jan 05 21:00:37 compute-0 kepler[226227]: E0105 21:00:37.814068       1 accelerator.go:154] [DUMMY] doesn't contain GPU
Jan 05 21:00:37 compute-0 kepler[226227]: E0105 21:00:37.814122       1 exporter.go:154] failed to init GPU accelerators: no devices found
Jan 05 21:00:37 compute-0 kepler[226227]: WARNING: failed to read int from file: open /sys/devices/system/cpu/cpu0/online: no such file or directory
Jan 05 21:00:37 compute-0 kepler[226227]: I0105 21:00:37.818683       1 exporter.go:84] Number of CPUs: 8
Jan 05 21:00:37 compute-0 systemd[1]: Started kepler container.
Jan 05 21:00:37 compute-0 sudo[226154]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:37 compute-0 podman[226237]: 2026-01-05 21:00:37.919168917 +0000 UTC m=+0.093143998 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=starting, health_failing_streak=1, health_log=, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, distribution-scope=public, io.openshift.tags=base rhel9, config_id=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, io.buildah.version=1.29.0, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler)
Jan 05 21:00:37 compute-0 systemd[1]: ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-4d6f25f12b6cdef6.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 21:00:37 compute-0 systemd[1]: ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928-4d6f25f12b6cdef6.service: Failed with result 'exit-code'.
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.324280       1 watcher.go:83] Using in cluster k8s config
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.324328       1 watcher.go:90] failed to get config: unable to load in-cluster configuration, KUBERNETES_SERVICE_HOST and KUBERNETES_SERVICE_PORT must be defined
Jan 05 21:00:38 compute-0 kepler[226227]: E0105 21:00:38.324394       1 manager.go:59] could not run the watcher k8s APIserver watcher was not enabled
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.332112       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_TOTAL Power
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.332154       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms]
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.352030       1 process_energy.go:129] Using the Ratio Power Model to estimate PROCESS_COMPONENTS Power
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.352601       1 process_energy.go:130] Feature names: [bpf_cpu_time_ms bpf_cpu_time_ms bpf_cpu_time_ms   gpu_compute_util]
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.371432       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.371771       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.372110       1 node_platform_energy.go:53] Using the Regressor/AbsPower Power Model to estimate Node Platform Power
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385144       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385198       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385206       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385213       1 regressor.go:276] Created predictor linear for trainer: "SGDRegressorTrainer"
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385264       1 model.go:125] Requesting for Machine Spec: &{authenticamd amd_epyc_rome 8 8 7 2800 1}
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385291       1 node_component_energy.go:57] Using the Regressor/AbsPower Power Model to estimate Node Component Power
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385406       1 prometheus_collector.go:90] Registered Process Prometheus metrics
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385447       1 prometheus_collector.go:95] Registered Container Prometheus metrics
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385473       1 prometheus_collector.go:100] Registered VM Prometheus metrics
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385501       1 prometheus_collector.go:104] Registered Node Prometheus metrics
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.385748       1 exporter.go:194] starting to listen on 0.0.0.0:8888
Jan 05 21:00:38 compute-0 kepler[226227]: I0105 21:00:38.386391       1 exporter.go:208] Started Kepler in 575.683371ms
Jan 05 21:00:38 compute-0 sudo[226422]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sqhffncfjjiesjejfpaskqgryfnrunum ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646838.1109996-654-188326529483559/AnsiballZ_find.py'
Jan 05 21:00:38 compute-0 sudo[226422]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:38 compute-0 python3.9[226424]: ansible-ansible.builtin.find Invoked with file_type=directory paths=['/var/lib/openstack/healthchecks/'] patterns=[] read_whole_file=False age_stamp=mtime recurse=False hidden=False follow=False get_checksum=False checksum_algorithm=sha1 use_regex=False exact_mode=True excludes=None contains=None age=None size=None depth=None mode=None encoding=None limit=None
Jan 05 21:00:38 compute-0 sudo[226422]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:39 compute-0 sudo[226589]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-utmwbcibdajixyfcqsdhjcqtvmgdsbfl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646839.3604295-664-38556412675369/AnsiballZ_podman_container_info.py'
Jan 05 21:00:39 compute-0 sudo[226589]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:39 compute-0 podman[226548]: 2026-01-05 21:00:39.979613621 +0000 UTC m=+0.138236928 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:00:40 compute-0 python3.9[226596]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_controller'] executable=podman
Jan 05 21:00:40 compute-0 sudo[226589]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:41 compute-0 sudo[226759]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mefalmibvqjadqesjobioervfemjejcj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646840.6951935-672-157512995605265/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:41 compute-0 sudo[226759]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:41 compute-0 python3.9[226761]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:42 compute-0 systemd[1]: Started libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope.
Jan 05 21:00:42 compute-0 podman[226762]: 2026-01-05 21:00:42.114059172 +0000 UTC m=+0.167719189 container exec 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2)
Jan 05 21:00:42 compute-0 podman[226762]: 2026-01-05 21:00:42.150815483 +0000 UTC m=+0.204475510 container exec_died 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller)
Jan 05 21:00:42 compute-0 sudo[226759]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:42 compute-0 systemd[1]: libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope: Deactivated successfully.
Jan 05 21:00:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:00:42.828 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:00:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:00:42.829 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:00:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:00:42.829 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:00:42 compute-0 sudo[226939]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bzefxtquumhnaboyqrblojkqcdnpihrs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646842.516055-680-194904898012367/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:43 compute-0 sudo[226939]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:43 compute-0 python3.9[226941]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_controller detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:43 compute-0 systemd[1]: Started libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope.
Jan 05 21:00:43 compute-0 podman[226942]: 2026-01-05 21:00:43.416051493 +0000 UTC m=+0.135666460 container exec 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, container_name=ovn_controller)
Jan 05 21:00:43 compute-0 podman[226942]: 2026-01-05 21:00:43.424558106 +0000 UTC m=+0.144172983 container exec_died 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:00:43 compute-0 sudo[226939]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:43 compute-0 systemd[1]: libpod-conmon-8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4.scope: Deactivated successfully.
Jan 05 21:00:44 compute-0 sudo[227121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-owfyxluwcgftuxfnyufhajcjlkrwylxo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646843.7569947-688-79432034823657/AnsiballZ_file.py'
Jan 05 21:00:44 compute-0 sudo[227121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:44 compute-0 python3.9[227123]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_controller recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:44 compute-0 sudo[227121]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:45 compute-0 sudo[227275]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cwycybcpssxtbtyyltituclmkvofvloq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646844.8939135-697-116783723048247/AnsiballZ_podman_container_info.py'
Jan 05 21:00:45 compute-0 sudo[227275]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:45 compute-0 python3.9[227277]: ansible-containers.podman.podman_container_info Invoked with name=['ovn_metadata_agent'] executable=podman
Jan 05 21:00:45 compute-0 sudo[227275]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:46 compute-0 sudo[227440]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvoryxhrynrdomvyajxtcowacnabvfnt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646846.3110108-705-111017183795876/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:46 compute-0 sudo[227440]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:47 compute-0 python3.9[227442]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:47 compute-0 systemd[1]: Started libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope.
Jan 05 21:00:47 compute-0 podman[227443]: 2026-01-05 21:00:47.321619309 +0000 UTC m=+0.152321667 container exec 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 05 21:00:47 compute-0 podman[227443]: 2026-01-05 21:00:47.355139486 +0000 UTC m=+0.185841754 container exec_died 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 05 21:00:47 compute-0 systemd[1]: libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope: Deactivated successfully.
Jan 05 21:00:47 compute-0 sudo[227440]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:47 compute-0 podman[227496]: 2026-01-05 21:00:47.832696299 +0000 UTC m=+0.159266697 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=openstack_network_exporter, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64)
Jan 05 21:00:48 compute-0 sudo[227643]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-uhiksmsxuccawjyihefyskgzlhrlecsj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646847.725649-713-90652916186343/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:48 compute-0 sudo[227643]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:48 compute-0 python3.9[227645]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ovn_metadata_agent detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:48 compute-0 systemd[1]: Started libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope.
Jan 05 21:00:48 compute-0 podman[227646]: 2026-01-05 21:00:48.659179339 +0000 UTC m=+0.149422073 container exec 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 21:00:48 compute-0 podman[227646]: 2026-01-05 21:00:48.694758916 +0000 UTC m=+0.185001590 container exec_died 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 21:00:48 compute-0 sudo[227643]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:48 compute-0 systemd[1]: libpod-conmon-490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39.scope: Deactivated successfully.
Jan 05 21:00:49 compute-0 sudo[227827]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-royvskzntenlngfcooycfjkcafwaabkf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646849.037007-721-155354642927733/AnsiballZ_file.py'
Jan 05 21:00:49 compute-0 sudo[227827]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:49 compute-0 python3.9[227829]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/ovn_metadata_agent recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:49 compute-0 sudo[227827]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:50 compute-0 sudo[227979]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zxwlckvkfiqbblkyollpoiklgtumaasa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646850.1441326-730-6020218203141/AnsiballZ_podman_container_info.py'
Jan 05 21:00:50 compute-0 sudo[227979]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:50 compute-0 podman[227981]: 2026-01-05 21:00:50.806005806 +0000 UTC m=+0.151938828 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:00:50 compute-0 python3.9[227982]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_compute'] executable=podman
Jan 05 21:00:50 compute-0 sudo[227979]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:51 compute-0 sudo[228167]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-shauijiegncmvljexyxzeshzompigvgh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646851.229073-738-269012379742863/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:51 compute-0 sudo[228167]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:51 compute-0 python3.9[228169]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:52 compute-0 systemd[1]: Started libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope.
Jan 05 21:00:52 compute-0 podman[228170]: 2026-01-05 21:00:52.125106815 +0000 UTC m=+0.147595856 container exec dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:00:52 compute-0 podman[228170]: 2026-01-05 21:00:52.161041681 +0000 UTC m=+0.183530712 container exec_died dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 21:00:52 compute-0 systemd[1]: libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope: Deactivated successfully.
Jan 05 21:00:52 compute-0 sudo[228167]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.493 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.495 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.496 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:00:52 compute-0 nova_compute[186018]: 2026-01-05 21:00:52.513 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:53 compute-0 nova_compute[186018]: 2026-01-05 21:00:53.523 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:53 compute-0 nova_compute[186018]: 2026-01-05 21:00:53.525 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:00:53 compute-0 sudo[228350]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mymhzqmsabshlzxadbkznhxbkqzlpxni ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646852.5068142-746-16533179235653/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:53 compute-0 sudo[228350]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:53 compute-0 podman[228353]: 2026-01-05 21:00:53.874414918 +0000 UTC m=+0.093640580 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:00:53 compute-0 podman[228352]: 2026-01-05 21:00:53.877386386 +0000 UTC m=+0.102653245 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:00:54 compute-0 python3.9[228360]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_compute detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:54 compute-0 systemd[1]: Started libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope.
Jan 05 21:00:54 compute-0 podman[228392]: 2026-01-05 21:00:54.184755662 +0000 UTC m=+0.126250110 container exec dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:00:54 compute-0 podman[228392]: 2026-01-05 21:00:54.220455662 +0000 UTC m=+0.161950150 container exec_died dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 05 21:00:54 compute-0 systemd[1]: libpod-conmon-dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2.scope: Deactivated successfully.
Jan 05 21:00:54 compute-0 sudo[228350]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.489 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.491 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.524 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.525 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.526 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:00:54 compute-0 nova_compute[186018]: 2026-01-05 21:00:54.527 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.042 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.044 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5697MB free_disk=72.48026657104492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.044 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.045 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:00:55 compute-0 sudo[228573]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-auhnytkdqjlsjrsopkqcjawfazzpgnlq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646854.629955-754-27480511088634/AnsiballZ_file.py'
Jan 05 21:00:55 compute-0 sudo[228573]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.256 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.257 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:00:55 compute-0 python3.9[228575]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_compute recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.404 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:00:55 compute-0 sudo[228573]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.513 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.513 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.537 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.577 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.607 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.628 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.632 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:00:55 compute-0 nova_compute[186018]: 2026-01-05 21:00:55.632 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.588s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:00:56 compute-0 sudo[228725]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-rhtlplqhrgbkapftwyaitwaacgcxdjfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646855.7891188-763-101678577496398/AnsiballZ_podman_container_info.py'
Jan 05 21:00:56 compute-0 sudo[228725]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:56 compute-0 python3.9[228727]: ansible-containers.podman.podman_container_info Invoked with name=['node_exporter'] executable=podman
Jan 05 21:00:56 compute-0 nova_compute[186018]: 2026-01-05 21:00:56.603 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:56 compute-0 nova_compute[186018]: 2026-01-05 21:00:56.603 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:56 compute-0 nova_compute[186018]: 2026-01-05 21:00:56.604 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:56 compute-0 nova_compute[186018]: 2026-01-05 21:00:56.604 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:56 compute-0 sudo[228725]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:57 compute-0 sudo[228890]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gyeuowfwhzvbqbfnwvisjlrhlzzzkggl ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646857.0164394-771-176166353350048/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:57 compute-0 sudo[228890]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:57 compute-0 python3.9[228892]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:57 compute-0 systemd[1]: Started libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope.
Jan 05 21:00:57 compute-0 podman[228893]: 2026-01-05 21:00:57.953390592 +0000 UTC m=+0.111716210 container exec b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:00:57 compute-0 podman[228893]: 2026-01-05 21:00:57.987802809 +0000 UTC m=+0.146128417 container exec_died b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:00:58 compute-0 systemd[1]: libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope: Deactivated successfully.
Jan 05 21:00:58 compute-0 sudo[228890]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:58 compute-0 nova_compute[186018]: 2026-01-05 21:00:58.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:58 compute-0 nova_compute[186018]: 2026-01-05 21:00:58.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:00:58 compute-0 sudo[229071]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-evbzbefdtwqifnnhowoijyngpjqtfxqt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646858.3210242-779-131061412463044/AnsiballZ_podman_container_exec.py'
Jan 05 21:00:58 compute-0 sudo[229071]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:00:59 compute-0 python3.9[229073]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=node_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:00:59 compute-0 systemd[1]: Started libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope.
Jan 05 21:00:59 compute-0 podman[229074]: 2026-01-05 21:00:59.310580723 +0000 UTC m=+0.156078356 container exec b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:00:59 compute-0 podman[229074]: 2026-01-05 21:00:59.346700414 +0000 UTC m=+0.192198047 container exec_died b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:00:59 compute-0 sudo[229071]: pam_unix(sudo:session): session closed for user root
Jan 05 21:00:59 compute-0 systemd[1]: libpod-conmon-b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4.scope: Deactivated successfully.
Jan 05 21:00:59 compute-0 podman[202426]: time="2026-01-05T21:00:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:00:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:00:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27276 "" "Go-http-client/1.1"
Jan 05 21:00:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:00:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3841 "" "Go-http-client/1.1"
Jan 05 21:01:00 compute-0 sudo[229255]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qcypquneoqqksdptaiesoyowxlfwxdiu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646859.6857808-787-223809492904880/AnsiballZ_file.py'
Jan 05 21:01:00 compute-0 sudo[229255]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:00 compute-0 python3.9[229257]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/node_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:00 compute-0 sudo[229255]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:01 compute-0 sudo[229407]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iwjskgblfigfplpdcnpenatmnsesvipo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646860.8376243-796-242938935627048/AnsiballZ_podman_container_info.py'
Jan 05 21:01:01 compute-0 openstack_network_exporter[205720]: ERROR   21:01:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:01:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:01:01 compute-0 openstack_network_exporter[205720]: ERROR   21:01:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:01:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:01:01 compute-0 sudo[229407]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:01 compute-0 python3.9[229409]: ansible-containers.podman.podman_container_info Invoked with name=['podman_exporter'] executable=podman
Jan 05 21:01:01 compute-0 CROND[229421]: (root) CMD (run-parts /etc/cron.hourly)
Jan 05 21:01:01 compute-0 sudo[229407]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:01 compute-0 run-parts[229424]: (/etc/cron.hourly) starting 0anacron
Jan 05 21:01:01 compute-0 anacron[229436]: Anacron started on 2026-01-05
Jan 05 21:01:01 compute-0 anacron[229436]: Will run job `cron.daily' in 33 min.
Jan 05 21:01:01 compute-0 anacron[229436]: Will run job `cron.weekly' in 53 min.
Jan 05 21:01:01 compute-0 anacron[229436]: Will run job `cron.monthly' in 73 min.
Jan 05 21:01:01 compute-0 anacron[229436]: Jobs will be executed sequentially
Jan 05 21:01:01 compute-0 run-parts[229440]: (/etc/cron.hourly) finished 0anacron
Jan 05 21:01:01 compute-0 CROND[229420]: (root) CMDEND (run-parts /etc/cron.hourly)
Jan 05 21:01:02 compute-0 sudo[229601]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cvoemoawytlwsfiefcspcvmwguubokgt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646862.1753376-804-82420955439475/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:02 compute-0 sudo[229601]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:02 compute-0 podman[229558]: 2026-01-05 21:01:02.759541988 +0000 UTC m=+0.138830198 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:01:02 compute-0 python3.9[229611]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:03 compute-0 systemd[1]: Started libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope.
Jan 05 21:01:03 compute-0 podman[229612]: 2026-01-05 21:01:03.114263497 +0000 UTC m=+0.132986285 container exec 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:01:03 compute-0 podman[229612]: 2026-01-05 21:01:03.147258977 +0000 UTC m=+0.165981725 container exec_died 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:01:03 compute-0 sudo[229601]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:03 compute-0 systemd[1]: libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope: Deactivated successfully.
Jan 05 21:01:03 compute-0 sudo[229792]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ibuhwoglnotoofsfwrcwxtvnrlodjawe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646863.4582844-812-217918452507547/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:04 compute-0 sudo[229792]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:04 compute-0 python3.9[229794]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=podman_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:04 compute-0 systemd[1]: Started libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope.
Jan 05 21:01:04 compute-0 podman[229795]: 2026-01-05 21:01:04.394996565 +0000 UTC m=+0.133679243 container exec 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:01:04 compute-0 podman[229795]: 2026-01-05 21:01:04.431590028 +0000 UTC m=+0.170272626 container exec_died 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:01:04 compute-0 sudo[229792]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:04 compute-0 systemd[1]: libpod-conmon-8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094.scope: Deactivated successfully.
Jan 05 21:01:05 compute-0 sudo[229975]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-avgbtrcttngpqvfuisgryvuddmeznxla ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646864.831047-820-250814928820004/AnsiballZ_file.py'
Jan 05 21:01:05 compute-0 sudo[229975]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:05 compute-0 python3.9[229977]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/podman_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:05 compute-0 sudo[229975]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:06 compute-0 sudo[230138]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elpfbfmyimvxtqnzcgfzldozssupqpnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646865.964483-829-281018651675902/AnsiballZ_podman_container_info.py'
Jan 05 21:01:06 compute-0 sudo[230138]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:06 compute-0 podman[230101]: 2026-01-05 21:01:06.562084082 +0000 UTC m=+0.112545883 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=starting, health_failing_streak=2, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:01:06 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-1dc45169817025eb.service: Main process exited, code=exited, status=1/FAILURE
Jan 05 21:01:06 compute-0 systemd[1]: cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f-1dc45169817025eb.service: Failed with result 'exit-code'.
Jan 05 21:01:06 compute-0 python3.9[230144]: ansible-containers.podman.podman_container_info Invoked with name=['openstack_network_exporter'] executable=podman
Jan 05 21:01:06 compute-0 sudo[230138]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.776 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.776 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.778 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.784 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.806 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.807 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:01:07.808 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:01:07 compute-0 sudo[230310]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-blgnaniptvlsqtbswwjuacnbuausvtov ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646867.2508478-837-87713112770249/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:07 compute-0 sudo[230310]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:08 compute-0 python3.9[230312]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:08 compute-0 systemd[1]: Started libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope.
Jan 05 21:01:08 compute-0 podman[230313]: 2026-01-05 21:01:08.328920341 +0000 UTC m=+0.171786205 container exec aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, release=1755695350, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 21:01:08 compute-0 podman[230313]: 2026-01-05 21:01:08.337991368 +0000 UTC m=+0.180857232 container exec_died aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., version=9.6, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, release=1755695350, managed_by=edpm_ansible, io.buildah.version=1.33.7, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 
'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:01:08 compute-0 sudo[230310]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:08 compute-0 systemd[1]: libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope: Deactivated successfully.
Jan 05 21:01:08 compute-0 podman[230328]: 2026-01-05 21:01:08.478250841 +0000 UTC m=+0.130047608 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.component=ubi9-container, release-0.7.12=, config_id=kepler, distribution-scope=public)
Jan 05 21:01:09 compute-0 sudo[230509]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dylgjxupfmjmmktcfievipqgtbwvxjnn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646868.6916578-845-146651258564667/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:09 compute-0 sudo[230509]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:09 compute-0 python3.9[230511]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=openstack_network_exporter detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:09 compute-0 systemd[1]: Started libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope.
Jan 05 21:01:09 compute-0 podman[230512]: 2026-01-05 21:01:09.519280507 +0000 UTC m=+0.133281213 container exec aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, vendor=Red Hat, Inc., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:01:09 compute-0 podman[230512]: 2026-01-05 21:01:09.552595125 +0000 UTC m=+0.166595811 container exec_died aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, version=9.6, name=ubi9-minimal, vendor=Red Hat, Inc., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, release=1755695350, config_id=openstack_network_exporter, maintainer=Red Hat, Inc.)
Jan 05 21:01:09 compute-0 sudo[230509]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:09 compute-0 systemd[1]: libpod-conmon-aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb.scope: Deactivated successfully.
Jan 05 21:01:10 compute-0 sudo[230707]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-obkqbmagmzazsptclfnvwhauytzkyfms ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646869.9035468-853-225241091734180/AnsiballZ_file.py'
Jan 05 21:01:10 compute-0 sudo[230707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:10 compute-0 podman[230666]: 2026-01-05 21:01:10.508924834 +0000 UTC m=+0.149577387 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, tcib_managed=true, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224)
Jan 05 21:01:10 compute-0 python3.9[230712]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/openstack_network_exporter recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:10 compute-0 sudo[230707]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:11 compute-0 sudo[230862]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ftjufvopxelgamrfsdevyoulhmpltsuc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646871.013884-862-157954024055466/AnsiballZ_podman_container_info.py'
Jan 05 21:01:11 compute-0 sudo[230862]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:11 compute-0 python3.9[230864]: ansible-containers.podman.podman_container_info Invoked with name=['ceilometer_agent_ipmi'] executable=podman
Jan 05 21:01:11 compute-0 sudo[230862]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:12 compute-0 sudo[231027]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-djgbqzmndrivnipcpajesozvfatqbigw ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646872.1633925-870-234628833090422/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:12 compute-0 sudo[231027]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:12 compute-0 python3.9[231029]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:13 compute-0 systemd[1]: Started libpod-conmon-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope.
Jan 05 21:01:13 compute-0 podman[231030]: 2026-01-05 21:01:13.209051484 +0000 UTC m=+0.178337646 container exec cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:01:13 compute-0 podman[231030]: 2026-01-05 21:01:13.242926386 +0000 UTC m=+0.212212478 container exec_died cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:01:13 compute-0 sudo[231027]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:13 compute-0 systemd[1]: libpod-conmon-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope: Deactivated successfully.
Jan 05 21:01:14 compute-0 sudo[231211]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-vwvkwtbdqthlxdwdxkazewfumpervuiy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646873.6394613-878-104305298214144/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:14 compute-0 sudo[231211]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:14 compute-0 python3.9[231213]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=ceilometer_agent_ipmi detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:14 compute-0 systemd[1]: Started libpod-conmon-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope.
Jan 05 21:01:14 compute-0 podman[231214]: 2026-01-05 21:01:14.65706855 +0000 UTC m=+0.169086435 container exec cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:01:14 compute-0 podman[231214]: 2026-01-05 21:01:14.690038288 +0000 UTC m=+0.202056143 container exec_died cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:01:14 compute-0 sudo[231211]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:14 compute-0 systemd[1]: libpod-conmon-cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f.scope: Deactivated successfully.
Jan 05 21:01:15 compute-0 sudo[231393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mvihthncxlhwkfqaeheyzkmztfmxommq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646875.013486-886-61140361061912/AnsiballZ_file.py'
Jan 05 21:01:15 compute-0 sudo[231393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:15 compute-0 python3.9[231395]: ansible-ansible.builtin.file Invoked with group=42405 mode=0700 owner=42405 path=/var/lib/openstack/healthchecks/ceilometer_agent_ipmi recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:15 compute-0 sudo[231393]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:16 compute-0 sudo[231545]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ksrnbezfamrszzgrxvnnztnwmnnuxyyk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646876.2484336-895-90186323958501/AnsiballZ_podman_container_info.py'
Jan 05 21:01:16 compute-0 sudo[231545]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:17 compute-0 python3.9[231547]: ansible-containers.podman.podman_container_info Invoked with name=['kepler'] executable=podman
Jan 05 21:01:17 compute-0 sudo[231545]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:18 compute-0 sudo[231722]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ckcaohfcuoeldbyrafmvtyvfddgzkwxm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646877.5122375-903-166889142212990/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:18 compute-0 sudo[231722]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:18 compute-0 podman[231682]: 2026-01-05 21:01:18.147836515 +0000 UTC m=+0.154449474 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, container_name=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, vendor=Red Hat, Inc., config_id=openstack_network_exporter, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Jan 05 21:01:18 compute-0 python3.9[231729]: ansible-containers.podman.podman_container_exec Invoked with command=id -u name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:18 compute-0 systemd[1]: Started libpod-conmon-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope.
Jan 05 21:01:18 compute-0 podman[231732]: 2026-01-05 21:01:18.569009674 +0000 UTC m=+0.154510245 container exec ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.openshift.tags=base rhel9, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, container_name=kepler, com.redhat.component=ubi9-container, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, architecture=x86_64)
Jan 05 21:01:18 compute-0 podman[231732]: 2026-01-05 21:01:18.606754277 +0000 UTC m=+0.192254858 container exec_died ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9, managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, name=ubi9, release-0.7.12=, release=1214.1726694543, architecture=x86_64, maintainer=Red Hat, Inc., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc.)
Jan 05 21:01:18 compute-0 systemd[1]: libpod-conmon-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope: Deactivated successfully.
Jan 05 21:01:18 compute-0 sudo[231722]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:19 compute-0 sudo[231911]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xtjrndrhbvhnfbcajfcgktscyctjceus ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646879.0120916-911-144345046691358/AnsiballZ_podman_container_exec.py'
Jan 05 21:01:19 compute-0 sudo[231911]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:19 compute-0 python3.9[231913]: ansible-containers.podman.podman_container_exec Invoked with command=id -g name=kepler detach=False executable=podman privileged=False tty=False argv=None env=None user=None workdir=None
Jan 05 21:01:19 compute-0 systemd[1]: Started libpod-conmon-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope.
Jan 05 21:01:20 compute-0 podman[231914]: 2026-01-05 21:01:20.004193846 +0000 UTC m=+0.132218214 container exec ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., distribution-scope=public, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, config_id=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=)
Jan 05 21:01:20 compute-0 podman[231914]: 2026-01-05 21:01:20.038417288 +0000 UTC m=+0.166441676 container exec_died ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, name=ubi9, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, container_name=kepler)
Jan 05 21:01:20 compute-0 systemd[1]: libpod-conmon-ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928.scope: Deactivated successfully.
Jan 05 21:01:20 compute-0 sudo[231911]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:20 compute-0 sudo[232110]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phefhhtiosioqdzltqwwoxldgsongmju ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646880.4526963-919-86564476354498/AnsiballZ_file.py'
Jan 05 21:01:21 compute-0 sudo[232110]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:21 compute-0 podman[232067]: 2026-01-05 21:01:21.10121342 +0000 UTC m=+0.222790454 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:01:21 compute-0 python3.9[232115]: ansible-ansible.builtin.file Invoked with group=0 mode=0700 owner=0 path=/var/lib/openstack/healthchecks/kepler recurse=True state=directory force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:21 compute-0 sudo[232110]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:22 compute-0 sudo[232271]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bxzaevxcowlvubdyqsfrdbkihcwamgfa ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646881.606627-928-764250841167/AnsiballZ_file.py'
Jan 05 21:01:22 compute-0 sudo[232271]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:22 compute-0 python3.9[232273]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall/ state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:22 compute-0 sudo[232271]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:23 compute-0 sudo[232423]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-unuprloxnudfohkumgtkbvypomokqgao ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646882.748044-936-255776613464411/AnsiballZ_stat.py'
Jan 05 21:01:23 compute-0 sudo[232423]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:23 compute-0 python3.9[232425]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/kepler.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:23 compute-0 sudo[232423]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:24 compute-0 sudo[232576]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-wakvgzqhimvkbgdujkkwrqcbxpaailpe ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646882.748044-936-255776613464411/AnsiballZ_copy.py'
Jan 05 21:01:24 compute-0 podman[232521]: 2026-01-05 21:01:24.411122733 +0000 UTC m=+0.126478256 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:01:24 compute-0 sudo[232576]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:24 compute-0 podman[232520]: 2026-01-05 21:01:24.425340143 +0000 UTC m=+0.144403062 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:01:24 compute-0 python3.9[232589]: ansible-ansible.legacy.copy Invoked with dest=/var/lib/edpm-config/firewall/kepler.yaml mode=0640 src=/home/zuul/.ansible/tmp/ansible-tmp-1767646882.748044-936-255776613464411/.source.yaml _original_basename=firewall.yaml follow=False checksum=40b8960d32c81de936cddbeb137a8240ecc54e7b backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:24 compute-0 sudo[232576]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:25 compute-0 sudo[232739]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dxfwziuzuprdyixokqpbtmncxemtknbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646885.0785987-952-240498340872382/AnsiballZ_file.py'
Jan 05 21:01:25 compute-0 sudo[232739]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:25 compute-0 python3.9[232741]: ansible-ansible.builtin.file Invoked with group=root mode=0750 owner=root path=/var/lib/edpm-config/firewall state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:25 compute-0 sudo[232739]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:26 compute-0 sudo[232891]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-acyzcdshodxmthujrbqwllrpykazmfey ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646886.2294257-960-4733226211276/AnsiballZ_stat.py'
Jan 05 21:01:26 compute-0 sudo[232891]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:26 compute-0 python3.9[232893]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:27 compute-0 sudo[232891]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:27 compute-0 sudo[232969]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-hlmpvzqxsjtwnzdpzlnaaqjjcmvywcgy ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646886.2294257-960-4733226211276/AnsiballZ_file.py'
Jan 05 21:01:27 compute-0 sudo[232969]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:27 compute-0 python3.9[232971]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml _original_basename=base-rules.yaml.j2 recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-base.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:27 compute-0 sudo[232969]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:28 compute-0 sudo[233121]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kdueaqqxvskzwrgedtwvgiafiwcmftzc ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646888.0429814-972-129958082701017/AnsiballZ_stat.py'
Jan 05 21:01:28 compute-0 sudo[233121]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:28 compute-0 python3.9[233123]: ansible-ansible.legacy.stat Invoked with path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:28 compute-0 sudo[233121]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:29 compute-0 sudo[233199]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dcdbeiateqdswepqfqsczfsumxbtoolm ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646888.0429814-972-129958082701017/AnsiballZ_file.py'
Jan 05 21:01:29 compute-0 sudo[233199]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:29 compute-0 python3.9[233201]: ansible-ansible.legacy.file Invoked with mode=0644 dest=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml _original_basename=.de45l_pu recurse=False state=file path=/var/lib/edpm-config/firewall/edpm-nftables-user-rules.yaml force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:29 compute-0 sudo[233199]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:29 compute-0 podman[202426]: time="2026-01-05T21:01:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:01:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:01:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27275 "" "Go-http-client/1.1"
Jan 05 21:01:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:01:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3849 "" "Go-http-client/1.1"
Jan 05 21:01:30 compute-0 sudo[233351]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mmgqpidzmmnytijxyqksxctwpxfwerdt ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646889.8583727-984-229213226770138/AnsiballZ_stat.py'
Jan 05 21:01:30 compute-0 sudo[233351]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:30 compute-0 python3.9[233353]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/iptables.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:30 compute-0 sudo[233351]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:31 compute-0 sudo[233429]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pyhozivmrlbahcyuoetrlyflcxbagetx ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646889.8583727-984-229213226770138/AnsiballZ_file.py'
Jan 05 21:01:31 compute-0 sudo[233429]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:31 compute-0 python3.9[233431]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/iptables.nft _original_basename=iptables.nft recurse=False state=file path=/etc/nftables/iptables.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:31 compute-0 sudo[233429]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:31 compute-0 openstack_network_exporter[205720]: ERROR   21:01:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:01:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:01:31 compute-0 openstack_network_exporter[205720]: ERROR   21:01:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:01:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:01:32 compute-0 sudo[233582]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-nreylztbpaoteqnezkumgmqhdpgicowv ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646891.6276822-997-56408111276273/AnsiballZ_command.py'
Jan 05 21:01:32 compute-0 sudo[233582]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:32 compute-0 python3.9[233584]: ansible-ansible.legacy.command Invoked with _raw_params=nft -j list ruleset _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:01:32 compute-0 sudo[233582]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:33 compute-0 podman[233709]: 2026-01-05 21:01:33.477964684 +0000 UTC m=+0.112090611 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:01:33 compute-0 sudo[233750]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-luztjhgdzwqlkepdnamwklqowhdrsouh ; /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646892.7541454-1005-55271423313051/AnsiballZ_edpm_nftables_from_files.py'
Jan 05 21:01:33 compute-0 sudo[233750]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:33 compute-0 python3[233759]: ansible-edpm_nftables_from_files Invoked with src=/var/lib/edpm-config/firewall
Jan 05 21:01:33 compute-0 sudo[233750]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:34 compute-0 sudo[233909]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fqwcwnxloszartufwwhjslgjgweifhcq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646894.0416312-1013-181298235467033/AnsiballZ_stat.py'
Jan 05 21:01:34 compute-0 sudo[233909]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:34 compute-0 python3.9[233911]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:34 compute-0 sudo[233909]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:35 compute-0 sudo[233987]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-mfrbqlxisdjbqsnguxxpsuhmdqqkohli ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646894.0416312-1013-181298235467033/AnsiballZ_file.py'
Jan 05 21:01:35 compute-0 sudo[233987]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:35 compute-0 python3.9[233989]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:35 compute-0 sudo[233987]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:36 compute-0 sudo[234139]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yilvsewkwvyxpmjikvqqczdjixewtrbb ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646895.9175467-1025-188564564772879/AnsiballZ_stat.py'
Jan 05 21:01:36 compute-0 sudo[234139]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:36 compute-0 python3.9[234141]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-update-jumps.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:36 compute-0 podman[234142]: 2026-01-05 21:01:36.793217286 +0000 UTC m=+0.135327206 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:01:36 compute-0 sudo[234139]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:37 compute-0 sudo[234237]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-tapolhkgkzratksflkvbxcybdlpxixle ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646895.9175467-1025-188564564772879/AnsiballZ_file.py'
Jan 05 21:01:37 compute-0 sudo[234237]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:37 compute-0 python3.9[234239]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-update-jumps.nft _original_basename=jump-chain.j2 recurse=False state=file path=/etc/nftables/edpm-update-jumps.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:37 compute-0 sudo[234237]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:38 compute-0 sudo[234389]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-affqjlmjtavtmmsoegnsgdkvupefqnbj ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646897.892726-1037-143133112172972/AnsiballZ_stat.py'
Jan 05 21:01:38 compute-0 sudo[234389]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:38 compute-0 podman[234391]: 2026-01-05 21:01:38.680189546 +0000 UTC m=+0.118124968 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, vendor=Red Hat, Inc., distribution-scope=public, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., io.openshift.expose-services=, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, container_name=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 05 21:01:38 compute-0 python3.9[234392]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-flushes.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:38 compute-0 sudo[234389]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:39 compute-0 sudo[234486]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-pjjaseyviaoaqutuvpyffvkwuftzmzcz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646897.892726-1037-143133112172972/AnsiballZ_file.py'
Jan 05 21:01:39 compute-0 sudo[234486]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:39 compute-0 python3.9[234488]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-flushes.nft _original_basename=flush-chain.j2 recurse=False state=file path=/etc/nftables/edpm-flushes.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:39 compute-0 sudo[234486]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:40 compute-0 sudo[234638]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zbqhhwgcrilpibyorfdzlyvbiahlhfeh ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646899.7797134-1049-147175518058754/AnsiballZ_stat.py'
Jan 05 21:01:40 compute-0 sudo[234638]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:40 compute-0 python3.9[234640]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-chains.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:40 compute-0 sudo[234638]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:40 compute-0 podman[234643]: 2026-01-05 21:01:40.820958725 +0000 UTC m=+0.163355626 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:01:41 compute-0 sudo[234735]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yhitqusdaelumqepdcxtdhpofmrwujqs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646899.7797134-1049-147175518058754/AnsiballZ_file.py'
Jan 05 21:01:41 compute-0 sudo[234735]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:41 compute-0 python3.9[234737]: ansible-ansible.legacy.file Invoked with group=root mode=0600 owner=root dest=/etc/nftables/edpm-chains.nft _original_basename=chains.j2 recurse=False state=file path=/etc/nftables/edpm-chains.nft force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:41 compute-0 sudo[234735]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:42 compute-0 sudo[234887]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fmspveimbfxbynpqklcjigjjutbrszcu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646901.571777-1061-91755763469839/AnsiballZ_stat.py'
Jan 05 21:01:42 compute-0 sudo[234887]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:42 compute-0 python3.9[234889]: ansible-ansible.legacy.stat Invoked with path=/etc/nftables/edpm-rules.nft follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:01:42 compute-0 sudo[234887]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:01:42.829 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:01:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:01:42.830 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:01:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:01:42.830 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:01:43 compute-0 sudo[235012]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfwpdtmegzmtwoxzwopkfynjxvwciemf ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646901.571777-1061-91755763469839/AnsiballZ_copy.py'
Jan 05 21:01:43 compute-0 sudo[235012]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:43 compute-0 python3.9[235014]: ansible-ansible.legacy.copy Invoked with dest=/etc/nftables/edpm-rules.nft group=root mode=0600 owner=root src=/home/zuul/.ansible/tmp/ansible-tmp-1767646901.571777-1061-91755763469839/.source.nft follow=False _original_basename=ruleset.j2 checksum=b82fbd2c71bb7c36c630c2301913f0f42fd2e7ce backup=False force=True remote_src=False unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:43 compute-0 sudo[235012]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:44 compute-0 sudo[235164]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sssyhveivjctutvjkjybzfhqpssiobdg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646903.808127-1076-17337194262172/AnsiballZ_file.py'
Jan 05 21:01:44 compute-0 sudo[235164]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:44 compute-0 python3.9[235166]: ansible-ansible.builtin.file Invoked with group=root mode=0600 owner=root path=/etc/nftables/edpm-rules.nft.changed state=touch recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:44 compute-0 sudo[235164]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:45 compute-0 sudo[235316]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-emiqchrwosanlfuwrbbkncuwdcmkcfoz ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646904.9815524-1084-258613865569309/AnsiballZ_command.py'
Jan 05 21:01:45 compute-0 sudo[235316]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:45 compute-0 python3.9[235318]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-chains.nft /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft /etc/nftables/edpm-jumps.nft | nft -c -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:01:45 compute-0 sudo[235316]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:46 compute-0 sudo[235471]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-eyxhjyctrilgbevbgfjicgttetjwnpwg ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646906.1377535-1092-206844268076355/AnsiballZ_blockinfile.py'
Jan 05 21:01:46 compute-0 sudo[235471]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:47 compute-0 python3.9[235473]: ansible-ansible.builtin.blockinfile Invoked with backup=False block=include "/etc/nftables/iptables.nft"
                                             include "/etc/nftables/edpm-chains.nft"
                                             include "/etc/nftables/edpm-rules.nft"
                                             include "/etc/nftables/edpm-jumps.nft"
                                              path=/etc/sysconfig/nftables.conf validate=nft -c -f %s state=present marker=# {mark} ANSIBLE MANAGED BLOCK create=False marker_begin=BEGIN marker_end=END append_newline=False prepend_newline=False encoding=utf-8 unsafe_writes=False insertafter=None insertbefore=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:47 compute-0 sudo[235471]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:48 compute-0 sudo[235623]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-bhwomzwbzbxxihvzdgfetolfpnhfdxyo ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646907.506407-1101-68826380458209/AnsiballZ_command.py'
Jan 05 21:01:48 compute-0 sudo[235623]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:48 compute-0 python3.9[235625]: ansible-ansible.legacy.command Invoked with _raw_params=nft -f /etc/nftables/edpm-chains.nft _uses_shell=False expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:01:48 compute-0 sudo[235623]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:48 compute-0 podman[235670]: 2026-01-05 21:01:48.82862334 +0000 UTC m=+0.154969068 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41)
Jan 05 21:01:49 compute-0 sudo[235798]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-fidcmecqyvnxrewunethzklevhrwoxig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646908.6644647-1109-6967710874141/AnsiballZ_stat.py'
Jan 05 21:01:49 compute-0 sudo[235798]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:49 compute-0 python3.9[235800]: ansible-ansible.builtin.stat Invoked with path=/etc/nftables/edpm-rules.nft.changed follow=False get_checksum=True get_mime=True get_attributes=True get_selinux_context=False checksum_algorithm=sha1
Jan 05 21:01:49 compute-0 sudo[235798]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:50 compute-0 sudo[235953]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-qfgkecrnmfhxpdwyruryyewmykrnaezu ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646909.7627006-1117-203138515896108/AnsiballZ_command.py'
Jan 05 21:01:50 compute-0 sudo[235953]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:50 compute-0 python3.9[235955]: ansible-ansible.legacy.command Invoked with _raw_params=set -o pipefail; cat /etc/nftables/edpm-flushes.nft /etc/nftables/edpm-rules.nft /etc/nftables/edpm-update-jumps.nft | nft -f - _uses_shell=True expand_argument_vars=True stdin_add_newline=True strip_empty_ends=True cmd=None argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:01:50 compute-0 sudo[235953]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:51 compute-0 sudo[236123]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-iktgkcbrkmivkaysnlzgckowleikzxmn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646910.831462-1125-239274245635406/AnsiballZ_file.py'
Jan 05 21:01:51 compute-0 sudo[236123]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:01:51 compute-0 podman[236082]: 2026-01-05 21:01:51.499120037 +0000 UTC m=+0.196953111 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:01:51 compute-0 python3.9[236130]: ansible-ansible.builtin.file Invoked with path=/etc/nftables/edpm-rules.nft.changed state=absent recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None mode=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:01:51 compute-0 sudo[236123]: pam_unix(sudo:session): session closed for user root
Jan 05 21:01:52 compute-0 sshd-session[214564]: Connection closed by 192.168.122.30 port 55108
Jan 05 21:01:52 compute-0 sshd-session[214561]: pam_unix(sshd:session): session closed for user zuul
Jan 05 21:01:52 compute-0 systemd[1]: session-26.scope: Deactivated successfully.
Jan 05 21:01:52 compute-0 systemd[1]: session-26.scope: Consumed 2min 4.770s CPU time.
Jan 05 21:01:52 compute-0 systemd-logind[788]: Session 26 logged out. Waiting for processes to exit.
Jan 05 21:01:52 compute-0 systemd-logind[788]: Removed session 26.
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.484 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.484 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:54 compute-0 nova_compute[186018]: 2026-01-05 21:01:54.485 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:01:54 compute-0 podman[236161]: 2026-01-05 21:01:54.768147796 +0000 UTC m=+0.109109542 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 21:01:54 compute-0 podman[236162]: 2026-01-05 21:01:54.786688479 +0000 UTC m=+0.113902657 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.505 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.505 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.506 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.506 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.982 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.983 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5680MB free_disk=72.47970962524414GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.984 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:01:55 compute-0 nova_compute[186018]: 2026-01-05 21:01:55.984 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.056 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.057 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.091 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.107 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:01:56 compute-0 nova_compute[186018]: 2026-01-05 21:01:56.109 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.125s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:01:57 compute-0 nova_compute[186018]: 2026-01-05 21:01:57.109 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:57 compute-0 nova_compute[186018]: 2026-01-05 21:01:57.110 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:57 compute-0 nova_compute[186018]: 2026-01-05 21:01:57.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:57 compute-0 sshd-session[236201]: Accepted publickey for zuul from 192.168.122.30 port 53766 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 21:01:57 compute-0 systemd-logind[788]: New session 27 of user zuul.
Jan 05 21:01:57 compute-0 systemd[1]: Started Session 27 of User zuul.
Jan 05 21:01:57 compute-0 sshd-session[236201]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 21:01:58 compute-0 nova_compute[186018]: 2026-01-05 21:01:58.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:59 compute-0 python3.9[236354]: ansible-ansible.builtin.setup Invoked with gather_subset=['!all', '!min', 'local'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 21:01:59 compute-0 nova_compute[186018]: 2026-01-05 21:01:59.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:01:59 compute-0 podman[202426]: time="2026-01-05T21:01:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:01:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:01:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:01:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:01:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3845 "" "Go-http-client/1.1"
Jan 05 21:02:00 compute-0 nova_compute[186018]: 2026-01-05 21:02:00.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:00 compute-0 sudo[236508]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-sgvkxmszmycwgcmdymlesigcykujfsln ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646919.9600785-34-223885694731704/AnsiballZ_systemd.py'
Jan 05 21:02:00 compute-0 sudo[236508]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:01 compute-0 python3.9[236510]: ansible-ansible.builtin.systemd Invoked with name=rsyslog daemon_reload=False daemon_reexec=False scope=system no_block=False state=None enabled=None force=None masked=None
Jan 05 21:02:01 compute-0 sudo[236508]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:01 compute-0 openstack_network_exporter[205720]: ERROR   21:02:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:02:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:02:01 compute-0 openstack_network_exporter[205720]: ERROR   21:02:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:02:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:02:02 compute-0 sudo[236661]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-regbbtlpxhxifqifvyaeafduwchbdcig ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646921.4925563-42-164701788103345/AnsiballZ_setup.py'
Jan 05 21:02:02 compute-0 sudo[236661]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:02 compute-0 python3.9[236663]: ansible-ansible.legacy.setup Invoked with filter=['ansible_pkg_mgr'] gather_subset=['!all'] gather_timeout=10 fact_path=/etc/ansible/facts.d
Jan 05 21:02:02 compute-0 sudo[236661]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:03 compute-0 sudo[236745]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-elzxgrdkbvqtbhclppvbwnjqhwbylyes ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646921.4925563-42-164701788103345/AnsiballZ_dnf.py'
Jan 05 21:02:03 compute-0 sudo[236745]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:03 compute-0 python3.9[236747]: ansible-ansible.legacy.dnf Invoked with name=['rsyslog-openssl'] state=present allow_downgrade=False allowerasing=False autoremove=False bugfix=False cacheonly=False disable_gpg_check=False disable_plugin=[] disablerepo=[] download_only=False enable_plugin=[] enablerepo=[] exclude=[] installroot=/ install_weak_deps=True security=False skip_broken=False update_cache=False update_only=False validate_certs=True sslverify=True lock_timeout=30 use_backend=auto best=None conf_file=None disable_excludes=None download_dir=None list=None nobest=None releasever=None
Jan 05 21:02:03 compute-0 podman[236748]: 2026-01-05 21:02:03.787491883 +0000 UTC m=+0.126939755 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:02:06 compute-0 sudo[236745]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:07 compute-0 sudo[236941]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dkxqsjqvtiqmtelpfccsczkgwfqnyivs ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646926.6721904-54-56635680741890/AnsiballZ_stat.py'
Jan 05 21:02:07 compute-0 sudo[236941]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:07 compute-0 podman[236901]: 2026-01-05 21:02:07.423711703 +0000 UTC m=+0.143142937 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:02:07 compute-0 python3.9[236947]: ansible-ansible.legacy.stat Invoked with path=/etc/pki/rsyslog/ca-openshift.crt follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:02:07 compute-0 sudo[236941]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:08 compute-0 sudo[237069]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-cptwbxmmhycfotacpugexogklnssfaze ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646926.6721904-54-56635680741890/AnsiballZ_copy.py'
Jan 05 21:02:08 compute-0 sudo[237069]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:08 compute-0 python3.9[237071]: ansible-ansible.legacy.copy Invoked with dest=/etc/pki/rsyslog/ca-openshift.crt mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646926.6721904-54-56635680741890/.source.crt _original_basename=ca-openshift.crt follow=False checksum=1d88bab26da5c85710a770c705f3555781bf2a38 backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:02:08 compute-0 sudo[237069]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:09 compute-0 podman[237195]: 2026-01-05 21:02:09.652593486 +0000 UTC m=+0.116150595 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, vcs-type=git, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 21:02:09 compute-0 sudo[237238]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-izfdamvstrjzptzipykzludirkuakwan ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646928.9753728-69-28529087977183/AnsiballZ_file.py'
Jan 05 21:02:09 compute-0 sudo[237238]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:09 compute-0 python3.9[237243]: ansible-ansible.builtin.file Invoked with mode=0755 path=/etc/rsyslog.d state=directory recurse=False force=False follow=True modification_time_format=%Y%m%d%H%M.%S access_time_format=%Y%m%d%H%M.%S unsafe_writes=False _original_basename=None _diff_peek=None src=None modification_time=None access_time=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:02:09 compute-0 sudo[237238]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:10 compute-0 sudo[237393]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-feulrhjosmcpybrmvauisscxwnnpyfnq ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646930.1494727-77-30805733915394/AnsiballZ_stat.py'
Jan 05 21:02:10 compute-0 sudo[237393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:10 compute-0 python3.9[237395]: ansible-ansible.legacy.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_checksum=True get_size=False checksum_algorithm=sha1 get_mime=True get_attributes=True get_selinux_context=False
Jan 05 21:02:10 compute-0 sudo[237393]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:11 compute-0 sudo[237531]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-trmtewpszeolrzhbyyyszbodicjqtkhn ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646930.1494727-77-30805733915394/AnsiballZ_copy.py'
Jan 05 21:02:11 compute-0 sudo[237531]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:11 compute-0 podman[237490]: 2026-01-05 21:02:11.625388644 +0000 UTC m=+0.144999166 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 05 21:02:11 compute-0 python3.9[237537]: ansible-ansible.legacy.copy Invoked with dest=/etc/rsyslog.d/10-telemetry.conf mode=0644 remote_src=False src=/home/zuul/.ansible/tmp/ansible-tmp-1767646930.1494727-77-30805733915394/.source.conf _original_basename=10-telemetry.conf follow=False checksum=76865d9dd4bf9cd322a47065c046bcac194645ab backup=False force=True unsafe_writes=False content=NOT_LOGGING_PARAMETER validate=None directory_mode=None local_follow=None owner=None group=None seuser=None serole=None selevel=None setype=None attributes=None
Jan 05 21:02:11 compute-0 sudo[237531]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:12 compute-0 sudo[237689]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-kyohsuxoqtvsujkaekqkiyhpijokpldk ; /usr/bin/python3.9 /home/zuul/.ansible/tmp/ansible-tmp-1767646932.1217015-92-215578827102910/AnsiballZ_systemd.py'
Jan 05 21:02:12 compute-0 sudo[237689]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:12 compute-0 python3.9[237691]: ansible-ansible.builtin.systemd Invoked with name=rsyslog.service state=restarted daemon_reload=False daemon_reexec=False scope=system no_block=False enabled=None force=None masked=None
Jan 05 21:02:13 compute-0 systemd[1]: Stopping System Logging Service...
Jan 05 21:02:13 compute-0 rsyslogd[1006]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="1006" x-info="https://www.rsyslog.com"] exiting on signal 15.
Jan 05 21:02:13 compute-0 systemd[1]: rsyslog.service: Deactivated successfully.
Jan 05 21:02:13 compute-0 systemd[1]: Stopped System Logging Service.
Jan 05 21:02:13 compute-0 systemd[1]: rsyslog.service: Consumed 4.692s CPU time, 8.0M memory peak, read 0B from disk, written 6.4M to disk.
Jan 05 21:02:13 compute-0 systemd[1]: Starting System Logging Service...
Jan 05 21:02:13 compute-0 rsyslogd[237695]: [origin software="rsyslogd" swVersion="8.2510.0-2.el9" x-pid="237695" x-info="https://www.rsyslog.com"] start
Jan 05 21:02:13 compute-0 systemd[1]: Started System Logging Service.
Jan 05 21:02:13 compute-0 rsyslogd[237695]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 21:02:13 compute-0 rsyslogd[237695]: Warning: Certificate file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2330 ]
Jan 05 21:02:13 compute-0 rsyslogd[237695]: Warning: Key file is not set [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2331 ]
Jan 05 21:02:13 compute-0 rsyslogd[237695]: nsd_ossl: TLS Connection initiated with remote syslog server '172.17.0.80'. [v8.2510.0-2.el9]
Jan 05 21:02:13 compute-0 sudo[237689]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:13 compute-0 rsyslogd[237695]: nsd_ossl: Information, no shared curve between syslog client '172.17.0.80' and server [v8.2510.0-2.el9]
Jan 05 21:02:14 compute-0 sshd-session[236204]: Connection closed by 192.168.122.30 port 53766
Jan 05 21:02:14 compute-0 sshd-session[236201]: pam_unix(sshd:session): session closed for user zuul
Jan 05 21:02:14 compute-0 systemd[1]: session-27.scope: Deactivated successfully.
Jan 05 21:02:14 compute-0 systemd[1]: session-27.scope: Consumed 12.872s CPU time.
Jan 05 21:02:14 compute-0 systemd-logind[788]: Session 27 logged out. Waiting for processes to exit.
Jan 05 21:02:14 compute-0 systemd-logind[788]: Removed session 27.
Jan 05 21:02:19 compute-0 podman[237725]: 2026-01-05 21:02:19.751661361 +0000 UTC m=+0.083209808 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., container_name=openstack_network_exporter, distribution-scope=public, release=1755695350, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, vcs-type=git, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 21:02:21 compute-0 podman[237745]: 2026-01-05 21:02:21.844958124 +0000 UTC m=+0.183441356 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, 
tcib_managed=true, org.label-schema.license=GPLv2)
Jan 05 21:02:25 compute-0 podman[237772]: 2026-01-05 21:02:25.754560071 +0000 UTC m=+0.095587460 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:02:25 compute-0 podman[237771]: 2026-01-05 21:02:25.76605301 +0000 UTC m=+0.105721013 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 21:02:29 compute-0 podman[202426]: time="2026-01-05T21:02:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:02:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:02:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:02:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:02:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3834 "" "Go-http-client/1.1"
Jan 05 21:02:31 compute-0 openstack_network_exporter[205720]: ERROR   21:02:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:02:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:02:31 compute-0 openstack_network_exporter[205720]: ERROR   21:02:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:02:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:02:34 compute-0 podman[237812]: 2026-01-05 21:02:34.760774244 +0000 UTC m=+0.101179795 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:02:37 compute-0 podman[237836]: 2026-01-05 21:02:37.812032935 +0000 UTC m=+0.148271451 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:02:40 compute-0 podman[237855]: 2026-01-05 21:02:40.786824817 +0000 UTC m=+0.125888219 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, vcs-type=git, vendor=Red Hat, Inc., config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public)
Jan 05 21:02:42 compute-0 podman[237874]: 2026-01-05 21:02:42.725151787 +0000 UTC m=+0.067533519 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:02:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:02:42.831 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:02:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:02:42.831 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:02:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:02:42.831 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:02:50 compute-0 sshd-session[237895]: Accepted publickey for zuul from 38.102.83.164 port 57984 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 21:02:50 compute-0 systemd-logind[788]: New session 28 of user zuul.
Jan 05 21:02:50 compute-0 systemd[1]: Started Session 28 of User zuul.
Jan 05 21:02:50 compute-0 sshd-session[237895]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 21:02:50 compute-0 podman[237897]: 2026-01-05 21:02:50.210377317 +0000 UTC m=+0.098531656 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.6, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, vendor=Red Hat, Inc.)
Jan 05 21:02:51 compute-0 python3[238092]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 21:02:52 compute-0 podman[238165]: 2026-01-05 21:02:52.831114281 +0000 UTC m=+0.184299209 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 21:02:53 compute-0 sudo[238338]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-phmdmrwfzijeiygopepevitstrarzgvj ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646973.013-37297-259130261452302/AnsiballZ_command.py'
Jan 05 21:02:53 compute-0 sudo[238338]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:53 compute-0 python3[238340]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "ceilometer_agent_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:02:53 compute-0 sudo[238338]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:54 compute-0 sudo[238491]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ikqnsvfkimhcuizrkcpnwnotejjxahwi ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646974.313372-37308-224659433381409/AnsiballZ_command.py'
Jan 05 21:02:54 compute-0 sudo[238491]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:54 compute-0 python3[238493]: ansible-ansible.legacy.command Invoked with _raw_params=tstamp=$(date -d '30 minute ago' "+%Y-%m-%d %H:%M:%S")
                                           journalctl -t "nova_compute" --no-pager -S "${tstamp}"
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:02:55 compute-0 nova_compute[186018]: 2026-01-05 21:02:55.458 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:55 compute-0 nova_compute[186018]: 2026-01-05 21:02:55.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:55 compute-0 nova_compute[186018]: 2026-01-05 21:02:55.459 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:02:56 compute-0 sudo[238491]: pam_unix(sudo:session): session closed for user root
Jan 05 21:02:56 compute-0 nova_compute[186018]: 2026-01-05 21:02:56.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:56 compute-0 nova_compute[186018]: 2026-01-05 21:02:56.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:02:56 compute-0 nova_compute[186018]: 2026-01-05 21:02:56.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:02:56 compute-0 nova_compute[186018]: 2026-01-05 21:02:56.477 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:02:56 compute-0 nova_compute[186018]: 2026-01-05 21:02:56.477 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:56 compute-0 podman[238521]: 2026-01-05 21:02:56.748605753 +0000 UTC m=+0.087056178 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:02:56 compute-0 podman[238520]: 2026-01-05 21:02:56.799469807 +0000 UTC m=+0.128646710 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 21:02:57 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:57 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:02:57 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.498 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:02:57 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.498 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:02:57 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.499 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:02:57 compute-0 python3[238684]: ansible-ansible.builtin.stat Invoked with path=/etc/rsyslog.d/10-telemetry.conf follow=False get_md5=False get_checksum=True get_mime=True get_attributes=True checksum_algorithm=sha1
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:57.998 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.000 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5721MB free_disk=72.47669219970703GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.001 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.001 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.084 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.085 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.137 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.159 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.162 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:02:58 compute-0 nova_compute[186018]: 2026-01-05 21:02:58.163 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.161s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:02:59 compute-0 sudo[238835]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-dyptywidtudpqmbrlteqebsvblcdjzbk ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646978.5370662-37353-9896340801153/AnsiballZ_setup.py'
Jan 05 21:02:59 compute-0 sudo[238835]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:02:59 compute-0 nova_compute[186018]: 2026-01-05 21:02:59.164 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:59 compute-0 nova_compute[186018]: 2026-01-05 21:02:59.165 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:02:59 compute-0 python3[238837]: ansible-ansible.legacy.setup Invoked with gather_subset=['all'] gather_timeout=10 filter=[] fact_path=/etc/ansible/facts.d
Jan 05 21:02:59 compute-0 podman[202426]: time="2026-01-05T21:02:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:02:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:02:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:02:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:02:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3848 "" "Go-http-client/1.1"
Jan 05 21:03:00 compute-0 sudo[238835]: pam_unix(sudo:session): session closed for user root
Jan 05 21:03:01 compute-0 openstack_network_exporter[205720]: ERROR   21:03:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:03:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:03:01 compute-0 openstack_network_exporter[205720]: ERROR   21:03:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:03:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:03:01 compute-0 nova_compute[186018]: 2026-01-05 21:03:01.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:01 compute-0 nova_compute[186018]: 2026-01-05 21:03:01.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:01 compute-0 sudo[239060]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-jvrrqqdxzrkcdyuhaufbuowpjetylnpx ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646981.2458794-37382-231952712453976/AnsiballZ_command.py'
Jan 05 21:03:01 compute-0 sudo[239060]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:03:01 compute-0 python3[239062]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:03:02 compute-0 sudo[239060]: pam_unix(sudo:session): session closed for user root
Jan 05 21:03:03 compute-0 sudo[239225]:     zuul : TTY=pts/0 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-umnaykbndlcilxzeudmxzrnprytjfmdu ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767646982.5657341-37399-186917086364613/AnsiballZ_command.py'
Jan 05 21:03:03 compute-0 sudo[239225]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:03:03 compute-0 python3[239227]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:03:03 compute-0 sudo[239225]: pam_unix(sudo:session): session closed for user root
Jan 05 21:03:05 compute-0 podman[239267]: 2026-01-05 21:03:05.786171042 +0000 UTC m=+0.122079770 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.775 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.777 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.787 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': [], 'network.outgoing.packets.error': [], 'disk.device.allocation': [], 'network.incoming.bytes.delta': [], 'network.incoming.bytes.rate': [], 'network.incoming.packets.drop': [], 'memory.usage': [], 'power.state': [], 'disk.device.capacity': [], 'disk.device.read.bytes': [], 'network.incoming.bytes': [], 'disk.device.read.latency': [], 'disk.device.read.requests': [], 'disk.device.usage': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.804 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:03:07.805 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:03:08 compute-0 podman[239294]: 2026-01-05 21:03:08.794085335 +0000 UTC m=+0.128890986 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 21:03:11 compute-0 podman[239313]: 2026-01-05 21:03:11.796483295 +0000 UTC m=+0.131825273 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.component=ubi9-container, release=1214.1726694543, release-0.7.12=, vcs-type=git, architecture=x86_64, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0)
Jan 05 21:03:13 compute-0 podman[239332]: 2026-01-05 21:03:13.772588877 +0000 UTC m=+0.119925674 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 05 21:03:20 compute-0 podman[239352]: 2026-01-05 21:03:20.783369025 +0000 UTC m=+0.124374229 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, io.buildah.version=1.33.7, name=ubi9-minimal, io.openshift.expose-services=, release=1755695350, vendor=Red Hat, Inc., container_name=openstack_network_exporter, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:03:23 compute-0 podman[239372]: 2026-01-05 21:03:23.844558555 +0000 UTC m=+0.178390015 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:03:27 compute-0 podman[239398]: 2026-01-05 21:03:27.739041758 +0000 UTC m=+0.089019318 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:03:27 compute-0 podman[239397]: 2026-01-05 21:03:27.757989181 +0000 UTC m=+0.102412437 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 21:03:29 compute-0 podman[202426]: time="2026-01-05T21:03:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:03:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:03:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:03:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:03:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3846 "" "Go-http-client/1.1"
Jan 05 21:03:31 compute-0 openstack_network_exporter[205720]: ERROR   21:03:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:03:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:03:31 compute-0 openstack_network_exporter[205720]: ERROR   21:03:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:03:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:03:36 compute-0 podman[239435]: 2026-01-05 21:03:36.786003422 +0000 UTC m=+0.131356520 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:03:39 compute-0 podman[239458]: 2026-01-05 21:03:39.715562005 +0000 UTC m=+0.066639806 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 05 21:03:42 compute-0 podman[239478]: 2026-01-05 21:03:42.756980951 +0000 UTC m=+0.105451547 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, version=9.4, io.buildah.version=1.29.0, managed_by=edpm_ansible, container_name=kepler, vcs-type=git, io.openshift.expose-services=, architecture=x86_64, build-date=2024-09-18T21:23:30, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 21:03:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:03:42.832 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:03:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:03:42.832 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:03:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:03:42.832 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:03:44 compute-0 podman[239495]: 2026-01-05 21:03:44.828340444 +0000 UTC m=+0.159757790 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 21:03:51 compute-0 podman[239515]: 2026-01-05 21:03:51.727211298 +0000 UTC m=+0.083215957 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., release=1755695350, name=ubi9-minimal, architecture=x86_64, config_id=openstack_network_exporter, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:03:54 compute-0 podman[239536]: 2026-01-05 21:03:54.780102053 +0000 UTC m=+0.134978245 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:03:55 compute-0 nova_compute[186018]: 2026-01-05 21:03:55.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:55 compute-0 nova_compute[186018]: 2026-01-05 21:03:55.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:55 compute-0 nova_compute[186018]: 2026-01-05 21:03:55.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:03:57 compute-0 nova_compute[186018]: 2026-01-05 21:03:57.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.479 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.479 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.480 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.480 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.714 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.714 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.714 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:03:58 compute-0 nova_compute[186018]: 2026-01-05 21:03:58.715 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:03:58 compute-0 podman[239563]: 2026-01-05 21:03:58.790682938 +0000 UTC m=+0.134280026 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 05 21:03:58 compute-0 podman[239564]: 2026-01-05 21:03:58.799100277 +0000 UTC m=+0.128768113 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.300 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.303 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5691MB free_disk=72.47684860229492GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.303 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.304 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.383 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.384 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.417 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.432 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.434 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:03:59 compute-0 nova_compute[186018]: 2026-01-05 21:03:59.435 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.131s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:03:59 compute-0 podman[202426]: time="2026-01-05T21:03:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:03:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:03:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:03:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:03:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3853 "" "Go-http-client/1.1"
Jan 05 21:04:01 compute-0 openstack_network_exporter[205720]: ERROR   21:04:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:04:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:04:01 compute-0 openstack_network_exporter[205720]: ERROR   21:04:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:04:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:04:03 compute-0 nova_compute[186018]: 2026-01-05 21:04:03.416 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:03 compute-0 sshd-session[237908]: Received disconnect from 38.102.83.164 port 57984:11: disconnected by user
Jan 05 21:04:03 compute-0 sshd-session[237908]: Disconnected from user zuul 38.102.83.164 port 57984
Jan 05 21:04:03 compute-0 sshd-session[237895]: pam_unix(sshd:session): session closed for user zuul
Jan 05 21:04:03 compute-0 systemd[1]: session-28.scope: Deactivated successfully.
Jan 05 21:04:03 compute-0 systemd[1]: session-28.scope: Consumed 11.283s CPU time.
Jan 05 21:04:03 compute-0 nova_compute[186018]: 2026-01-05 21:04:03.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:03 compute-0 systemd-logind[788]: Session 28 logged out. Waiting for processes to exit.
Jan 05 21:04:03 compute-0 systemd-logind[788]: Removed session 28.
Jan 05 21:04:03 compute-0 nova_compute[186018]: 2026-01-05 21:04:03.482 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:07 compute-0 podman[239604]: 2026-01-05 21:04:07.759883863 +0000 UTC m=+0.097887659 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:04:10 compute-0 podman[239626]: 2026-01-05 21:04:10.761618174 +0000 UTC m=+0.107999273 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 21:04:13 compute-0 podman[239646]: 2026-01-05 21:04:13.784154518 +0000 UTC m=+0.125349626 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, build-date=2024-09-18T21:23:30, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, vcs-type=git, managed_by=edpm_ansible, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:04:15 compute-0 podman[239664]: 2026-01-05 21:04:15.769956209 +0000 UTC m=+0.117594904 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 05 21:04:22 compute-0 podman[239684]: 2026-01-05 21:04:22.790896042 +0000 UTC m=+0.129554555 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.expose-services=, vcs-type=git, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., name=ubi9-minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, architecture=x86_64, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350)
Jan 05 21:04:25 compute-0 podman[239705]: 2026-01-05 21:04:25.824590419 +0000 UTC m=+0.173845393 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_controller)
Jan 05 21:04:29 compute-0 podman[239730]: 2026-01-05 21:04:29.741299564 +0000 UTC m=+0.090745462 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:04:29 compute-0 podman[202426]: time="2026-01-05T21:04:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:04:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:04:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:04:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:04:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 05 21:04:29 compute-0 podman[239731]: 2026-01-05 21:04:29.7691092 +0000 UTC m=+0.118580148 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:04:31 compute-0 openstack_network_exporter[205720]: ERROR   21:04:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:04:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:04:31 compute-0 openstack_network_exporter[205720]: ERROR   21:04:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:04:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:04:38 compute-0 podman[239770]: 2026-01-05 21:04:38.792372727 +0000 UTC m=+0.140928873 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:04:41 compute-0 podman[239794]: 2026-01-05 21:04:41.790667 +0000 UTC m=+0.128963090 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 21:04:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:04:42.834 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:04:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:04:42.835 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:04:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:04:42.835 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:04:44 compute-0 podman[239813]: 2026-01-05 21:04:44.782072271 +0000 UTC m=+0.121079212 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, distribution-scope=public, release=1214.1726694543, release-0.7.12=, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4)
Jan 05 21:04:46 compute-0 podman[239832]: 2026-01-05 21:04:46.769646187 +0000 UTC m=+0.098551265 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 21:04:53 compute-0 podman[239852]: 2026-01-05 21:04:53.798283912 +0000 UTC m=+0.134282309 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vendor=Red Hat, Inc., name=ubi9-minimal, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 05 21:04:55 compute-0 nova_compute[186018]: 2026-01-05 21:04:55.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:55 compute-0 nova_compute[186018]: 2026-01-05 21:04:55.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:55 compute-0 nova_compute[186018]: 2026-01-05 21:04:55.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:04:56 compute-0 podman[239871]: 2026-01-05 21:04:56.852382312 +0000 UTC m=+0.193245210 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 05 21:04:57 compute-0 nova_compute[186018]: 2026-01-05 21:04:57.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.484 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.485 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.517 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.518 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.519 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.520 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.940 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.941 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5701MB free_disk=72.4768295288086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.941 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:04:58 compute-0 nova_compute[186018]: 2026-01-05 21:04:58.941 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.018 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.019 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.066 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.083 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.086 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:04:59 compute-0 nova_compute[186018]: 2026-01-05 21:04:59.087 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.146s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:04:59 compute-0 podman[202426]: time="2026-01-05T21:04:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:04:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:04:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:04:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:04:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3854 "" "Go-http-client/1.1"
Jan 05 21:05:00 compute-0 podman[239898]: 2026-01-05 21:05:00.789617023 +0000 UTC m=+0.126747402 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 21:05:00 compute-0 podman[239899]: 2026-01-05 21:05:00.812796619 +0000 UTC m=+0.138212382 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:05:01 compute-0 nova_compute[186018]: 2026-01-05 21:05:01.062 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:01 compute-0 nova_compute[186018]: 2026-01-05 21:05:01.063 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:01 compute-0 openstack_network_exporter[205720]: ERROR   21:05:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:05:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:05:01 compute-0 openstack_network_exporter[205720]: ERROR   21:05:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:05:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:05:03 compute-0 nova_compute[186018]: 2026-01-05 21:05:03.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:04 compute-0 nova_compute[186018]: 2026-01-05 21:05:04.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.776 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.777 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.777 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.778 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.781 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.783 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.785 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.786 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.788 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.789 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.796 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:05:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:05:09 compute-0 podman[239941]: 2026-01-05 21:05:09.756842205 +0000 UTC m=+0.106684638 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:05:12 compute-0 podman[239965]: 2026-01-05 21:05:12.769599954 +0000 UTC m=+0.106929824 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 05 21:05:15 compute-0 podman[239985]: 2026-01-05 21:05:15.809766649 +0000 UTC m=+0.150065141 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, distribution-scope=public, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 05 21:05:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:16.393 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=2, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=1) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:05:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:16.394 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:05:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:16.395 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '2'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:05:17 compute-0 podman[240005]: 2026-01-05 21:05:17.774452028 +0000 UTC m=+0.120256752 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:05:24 compute-0 podman[240027]: 2026-01-05 21:05:24.812832898 +0000 UTC m=+0.152450563 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, io.openshift.expose-services=, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41)
Jan 05 21:05:27 compute-0 podman[240047]: 2026-01-05 21:05:27.855748076 +0000 UTC m=+0.194864192 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:05:29 compute-0 podman[202426]: time="2026-01-05T21:05:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:05:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:05:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:05:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:05:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3855 "" "Go-http-client/1.1"
Jan 05 21:05:31 compute-0 openstack_network_exporter[205720]: ERROR   21:05:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:05:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:05:31 compute-0 openstack_network_exporter[205720]: ERROR   21:05:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:05:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:05:31 compute-0 podman[240072]: 2026-01-05 21:05:31.782901514 +0000 UTC m=+0.122430099 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 05 21:05:31 compute-0 podman[240073]: 2026-01-05 21:05:31.791713745 +0000 UTC m=+0.119123134 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:05:40 compute-0 podman[240115]: 2026-01-05 21:05:40.802817872 +0000 UTC m=+0.141953360 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:05:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:42.835 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:05:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:42.836 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:05:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:05:42.836 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:05:43 compute-0 podman[240138]: 2026-01-05 21:05:43.822112762 +0000 UTC m=+0.160304799 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:05:46 compute-0 podman[240158]: 2026-01-05 21:05:46.810573345 +0000 UTC m=+0.148818499 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_id=kepler, io.openshift.expose-services=, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, architecture=x86_64, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release=1214.1726694543, 
com.redhat.component=ubi9-container, io.buildah.version=1.29.0, managed_by=edpm_ansible, version=9.4, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:05:48 compute-0 podman[240178]: 2026-01-05 21:05:48.765479579 +0000 UTC m=+0.112586433 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, tcib_managed=true, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 21:05:55 compute-0 nova_compute[186018]: 2026-01-05 21:05:55.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:55 compute-0 nova_compute[186018]: 2026-01-05 21:05:55.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:05:55 compute-0 podman[240199]: 2026-01-05 21:05:55.791103586 +0000 UTC m=+0.135122551 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., config_id=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:05:57 compute-0 nova_compute[186018]: 2026-01-05 21:05:57.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:57 compute-0 nova_compute[186018]: 2026-01-05 21:05:57.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:57 compute-0 nova_compute[186018]: 2026-01-05 21:05:57.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:57 compute-0 nova_compute[186018]: 2026-01-05 21:05:57.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:05:57 compute-0 nova_compute[186018]: 2026-01-05 21:05:57.488 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:05:58 compute-0 nova_compute[186018]: 2026-01-05 21:05:58.066 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:58 compute-0 nova_compute[186018]: 2026-01-05 21:05:58.488 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:58 compute-0 nova_compute[186018]: 2026-01-05 21:05:58.489 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:05:58 compute-0 nova_compute[186018]: 2026-01-05 21:05:58.489 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:05:58 compute-0 nova_compute[186018]: 2026-01-05 21:05:58.517 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:05:58 compute-0 podman[240219]: 2026-01-05 21:05:58.837714761 +0000 UTC m=+0.182551911 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 21:05:59 compute-0 nova_compute[186018]: 2026-01-05 21:05:59.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:05:59 compute-0 podman[202426]: time="2026-01-05T21:05:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:05:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:05:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:05:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:05:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3861 "" "Go-http-client/1.1"
Jan 05 21:06:00 compute-0 nova_compute[186018]: 2026-01-05 21:06:00.494 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:00 compute-0 nova_compute[186018]: 2026-01-05 21:06:00.546 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:00 compute-0 nova_compute[186018]: 2026-01-05 21:06:00.547 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:00 compute-0 nova_compute[186018]: 2026-01-05 21:06:00.548 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:00 compute-0 nova_compute[186018]: 2026-01-05 21:06:00.548 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.073 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.075 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5712MB free_disk=72.4768295288086GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.076 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.077 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.389 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.390 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:06:01 compute-0 openstack_network_exporter[205720]: ERROR   21:06:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:06:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:06:01 compute-0 openstack_network_exporter[205720]: ERROR   21:06:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:06:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.490 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.584 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.585 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.604 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.629 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.666 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.681 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.684 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:06:01 compute-0 nova_compute[186018]: 2026-01-05 21:06:01.685 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.608s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:02 compute-0 nova_compute[186018]: 2026-01-05 21:06:02.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:02 compute-0 nova_compute[186018]: 2026-01-05 21:06:02.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:02 compute-0 nova_compute[186018]: 2026-01-05 21:06:02.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:02 compute-0 nova_compute[186018]: 2026-01-05 21:06:02.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:06:02 compute-0 podman[240245]: 2026-01-05 21:06:02.720075739 +0000 UTC m=+0.068005497 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 05 21:06:02 compute-0 podman[240246]: 2026-01-05 21:06:02.731719934 +0000 UTC m=+0.073050200 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:06:04 compute-0 nova_compute[186018]: 2026-01-05 21:06:04.498 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:05 compute-0 nova_compute[186018]: 2026-01-05 21:06:05.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:06.189 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=3, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=2) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:06:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:06.190 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:06:06 compute-0 nova_compute[186018]: 2026-01-05 21:06:06.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:11 compute-0 podman[240286]: 2026-01-05 21:06:11.771922502 +0000 UTC m=+0.115154458 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:06:14 compute-0 podman[240310]: 2026-01-05 21:06:14.791097367 +0000 UTC m=+0.130090449 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, tcib_managed=true, org.label-schema.build-date=20251202)
Jan 05 21:06:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:16.193 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '3'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:17 compute-0 podman[240330]: 2026-01-05 21:06:17.795456197 +0000 UTC m=+0.131911297 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, release=1214.1726694543, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., release-0.7.12=, container_name=kepler, distribution-scope=public, managed_by=edpm_ansible)
Jan 05 21:06:17 compute-0 nova_compute[186018]: 2026-01-05 21:06:17.854 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:17 compute-0 nova_compute[186018]: 2026-01-05 21:06:17.855 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:17 compute-0 nova_compute[186018]: 2026-01-05 21:06:17.888 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.064 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.065 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.078 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.079 186022 INFO nova.compute.claims [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.229 186022 DEBUG nova.compute.provider_tree [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.250 186022 DEBUG nova.scheduler.client.report [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.285 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.220s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.287 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.339 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.340 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.372 186022 INFO nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.421 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.551 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.554 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.555 186022 INFO nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Creating image(s)
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.557 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.558 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.560 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.561 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.562 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.988 186022 WARNING oslo_policy.policy [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 05 21:06:18 compute-0 nova_compute[186018]: 2026-01-05 21:06:18.989 186022 WARNING oslo_policy.policy [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] JSON formatted policy_file support is deprecated since Victoria release. You need to use YAML format which will be default in future. You can use ``oslopolicy-convert-json-to-yaml`` tool to convert existing JSON-formatted policy file to YAML-formatted in backward compatible way: https://docs.openstack.org/oslo.policy/latest/cli/oslopolicy-convert-json-to-yaml.html.
Jan 05 21:06:19 compute-0 podman[240350]: 2026-01-05 21:06:19.719475816 +0000 UTC m=+0.072937509 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:06:19 compute-0 nova_compute[186018]: 2026-01-05 21:06:19.915 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:19 compute-0 nova_compute[186018]: 2026-01-05 21:06:19.939 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Successfully created port: 9f21c713-156d-4cef-99ef-70022fb8e58b _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.015 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.017 186022 DEBUG nova.virt.images [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] 31cf9c34-2e56-49e9-bb98-955ac3cf9185 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.019 186022 DEBUG nova.privsep.utils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.021 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.part /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.294 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.part /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.converted" returned: 0 in 0.273s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.301 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.398 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec.converted --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.401 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.838s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:20 compute-0 nova_compute[186018]: 2026-01-05 21:06:20.426 186022 INFO oslo.privsep.daemon [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'nova.privsep.sys_admin_pctxt', '--privsep_sock_path', '/tmp/tmp633xtumu/privsep.sock']
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.188 186022 INFO oslo.privsep.daemon [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Spawned new privsep daemon via rootwrap
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.031 240388 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.039 240388 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.042 240388 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_CHOWN|CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_FOWNER|CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.043 240388 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240388
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.285 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.381 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.383 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.385 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.415 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.472 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.478 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.531 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk 1073741824" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.533 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.148s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.534 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.597 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.598 186022 DEBUG nova.virt.disk.api [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking if we can resize image /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.598 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.695 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.696 186022 DEBUG nova.virt.disk.api [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Cannot resize image /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.697 186022 DEBUG nova.objects.instance [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'migration_context' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.716 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.717 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.718 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.719 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.720 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.721 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.765 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f raw /var/lib/nova/instances/_base/ephemeral_1_0706d66 1G" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.767 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.831 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "mkfs -t vfat -n ephemeral0 /var/lib/nova/instances/_base/ephemeral_1_0706d66" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.833 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.113s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.861 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.962 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.964 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.965 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:21 compute-0 nova_compute[186018]: 2026-01-05 21:06:21.988 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.013 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Successfully updated port: 9f21c713-156d-4cef-99ef-70022fb8e58b _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.032 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.032 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.032 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.084 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.084 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.123 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 1073741824" returned: 0 in 0.039s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.125 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.160s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.126 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.194 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.195 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.196 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Ensure instance console log exists: /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.196 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.197 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.197 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.245 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.442 186022 DEBUG nova.compute.manager [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-changed-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.443 186022 DEBUG nova.compute.manager [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Refreshing instance network info cache due to event network-changed-9f21c713-156d-4cef-99ef-70022fb8e58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:06:22 compute-0 nova_compute[186018]: 2026-01-05 21:06:22.443 186022 DEBUG oslo_concurrency.lockutils [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.003 186022 DEBUG nova.network.neutron [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.052 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.053 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Instance network_info: |[{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.054 186022 DEBUG oslo_concurrency.lockutils [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.055 186022 DEBUG nova.network.neutron [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Refreshing network info cache for port 9f21c713-156d-4cef-99ef-70022fb8e58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.062 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Start _get_guest_xml network_info=[{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}], 'ephemerals': [{'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1, 'encrypted': False, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.082 186022 WARNING nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.091 186022 DEBUG nova.virt.libvirt.host [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.092 186022 DEBUG nova.virt.libvirt.host [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.098 186022 DEBUG nova.virt.libvirt.host [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.099 186022 DEBUG nova.virt.libvirt.host [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.100 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.101 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:05:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d9d5992a-1c00-4233-a43d-71321ed82446',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.102 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.102 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.103 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.103 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.104 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.104 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.105 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.105 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.106 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.106 186022 DEBUG nova.virt.hardware [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.111 186022 DEBUG nova.privsep.utils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.113 186022 DEBUG nova.virt.libvirt.vif [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-i94me5j7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:06:18Z,user_data=None,user_id='41f377b42540490198f271301cf5fe90',uuid=f64de408-e6d1-4f7f-9f94-e20a4c83a87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.114 186022 DEBUG nova.network.os_vif_util [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.115 186022 DEBUG nova.network.os_vif_util [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.117 186022 DEBUG nova.objects.instance [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'pci_devices' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.144 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <uuid>f64de408-e6d1-4f7f-9f94-e20a4c83a87a</uuid>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <name>instance-00000001</name>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <memory>524288</memory>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:name>test_0</nova:name>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:06:24</nova:creationTime>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:flavor name="m1.small">
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:memory>512</nova:memory>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:user uuid="41f377b42540490198f271301cf5fe90">admin</nova:user>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:project uuid="704814115a61471f9b45484171f67b5f">admin</nova:project>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="31cf9c34-2e56-49e9-bb98-955ac3cf9185"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         <nova:port uuid="9f21c713-156d-4cef-99ef-70022fb8e58b">
Jan 05 21:06:24 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="192.168.0.17" ipVersion="4"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <system>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="serial">f64de408-e6d1-4f7f-9f94-e20a4c83a87a</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="uuid">f64de408-e6d1-4f7f-9f94-e20a4c83a87a</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </system>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <os>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </os>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <features>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </features>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <target dev="vdb" bus="virtio"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.config"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:98:b1:c7"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <target dev="tap9f21c713-15"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/console.log" append="off"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <video>
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </video>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:06:24 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:06:24 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:06:24 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:06:24 compute-0 nova_compute[186018]: </domain>
Jan 05 21:06:24 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.146 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Preparing to wait for external event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.147 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.148 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.148 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.150 186022 DEBUG nova.virt.libvirt.vif [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-i94me5j7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:06:18Z,user_data=None,user_id='41f377b42540490198f271301cf5fe90',uuid=f64de408-e6d1-4f7f-9f94-e20a4c83a87a,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.150 186022 DEBUG nova.network.os_vif_util [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.151 186022 DEBUG nova.network.os_vif_util [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.152 186022 DEBUG os_vif [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.215 186022 DEBUG ovsdbapp.backend.ovs_idl [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Created schema index Interface.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.215 186022 DEBUG ovsdbapp.backend.ovs_idl [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Created schema index Port.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.216 186022 DEBUG ovsdbapp.backend.ovs_idl [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Created schema index Bridge.name autocreate_indices /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/__init__.py:106
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.216 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] tcp:127.0.0.1:6640: entering CONNECTING _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.217 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [POLLOUT] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.218 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] tcp:127.0.0.1:6640: entering ACTIVE _transition /usr/lib64/python3.9/site-packages/ovs/reconnect.py:519
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.219 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.221 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.225 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.236 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.237 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.237 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:06:24 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.239 186022 INFO oslo.privsep.daemon [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running privsep helper: ['sudo', 'nova-rootwrap', '/etc/nova/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/nova/nova.conf', '--config-file', '/etc/nova/nova-compute.conf', '--config-dir', '/etc/nova/nova.conf.d', '--privsep_context', 'vif_plug_ovs.privsep.vif_plug', '--privsep_sock_path', '/tmp/tmpj_er9aox/privsep.sock']
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.005 186022 INFO oslo.privsep.daemon [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Spawned new privsep daemon via rootwrap
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.873 240425 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.877 240425 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.879 240425 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_NET_ADMIN/CAP_DAC_OVERRIDE|CAP_NET_ADMIN/none
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:24.879 240425 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240425
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.343 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.344 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9f21c713-15, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.345 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9f21c713-15, col_values=(('external_ids', {'iface-id': '9f21c713-156d-4cef-99ef-70022fb8e58b', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:98:b1:c7', 'vm-uuid': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.348 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:25 compute-0 NetworkManager[56598]: <info>  [1767647185.3503] manager: (tap9f21c713-15): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/19)
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.351 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.364 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.366 186022 INFO os_vif [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15')
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.456 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.457 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.457 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.458 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No VIF found with MAC fa:16:3e:98:b1:c7, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:06:25 compute-0 nova_compute[186018]: 2026-01-05 21:06:25.460 186022 INFO nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Using config drive
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.000 186022 DEBUG nova.network.neutron [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated VIF entry in instance network info cache for port 9f21c713-156d-4cef-99ef-70022fb8e58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.002 186022 DEBUG nova.network.neutron [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.039 186022 DEBUG oslo_concurrency.lockutils [req-b57707d1-16b9-400b-b1ce-653a878fd619 req-5af29024-0e0f-4a98-8a1f-4450cc948990 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.326 186022 INFO nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Creating config drive at /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.config
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.340 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphmxnomth execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.489 186022 DEBUG oslo_concurrency.processutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmphmxnomth" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:06:26 compute-0 kernel: tun: Universal TUN/TAP device driver, 1.6
Jan 05 21:06:26 compute-0 NetworkManager[56598]: <info>  [1767647186.6699] manager: (tap9f21c713-15): new Tun device (/org/freedesktop/NetworkManager/Devices/20)
Jan 05 21:06:26 compute-0 kernel: tap9f21c713-15: entered promiscuous mode
Jan 05 21:06:26 compute-0 ovn_controller[98229]: 2026-01-05T21:06:26Z|00033|binding|INFO|Claiming lport 9f21c713-156d-4cef-99ef-70022fb8e58b for this chassis.
Jan 05 21:06:26 compute-0 ovn_controller[98229]: 2026-01-05T21:06:26Z|00034|binding|INFO|9f21c713-156d-4cef-99ef-70022fb8e58b: Claiming fa:16:3e:98:b1:c7 192.168.0.17
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.691 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:26 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:26.706 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:b1:c7 192.168.0.17'], port_security=['fa:16:3e:98:b1:c7 192.168.0.17'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.17/24', 'neutron:device_id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=9f21c713-156d-4cef-99ef-70022fb8e58b) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:06:26 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:26.707 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 9f21c713-156d-4cef-99ef-70022fb8e58b in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 bound to our chassis
Jan 05 21:06:26 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:26.709 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:06:26 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:26.711 107689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.default', '--privsep_sock_path', '/tmp/tmp9aadnj6k/privsep.sock']
Jan 05 21:06:26 compute-0 systemd-udevd[240465]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:06:26 compute-0 NetworkManager[56598]: <info>  [1767647186.7422] device (tap9f21c713-15): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:06:26 compute-0 NetworkManager[56598]: <info>  [1767647186.7434] device (tap9f21c713-15): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.782 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:26 compute-0 systemd-machined[157312]: New machine qemu-1-instance-00000001.
Jan 05 21:06:26 compute-0 podman[240441]: 2026-01-05 21:06:26.786406992 +0000 UTC m=+0.183542272 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_id=openstack_network_exporter, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=minimal rhel9, io.buildah.version=1.33.7, vcs-type=git, build-date=2025-08-20T13:12:41, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.openshift.expose-services=, version=9.6, container_name=openstack_network_exporter)
Jan 05 21:06:26 compute-0 ovn_controller[98229]: 2026-01-05T21:06:26Z|00035|binding|INFO|Setting lport 9f21c713-156d-4cef-99ef-70022fb8e58b ovn-installed in OVS
Jan 05 21:06:26 compute-0 ovn_controller[98229]: 2026-01-05T21:06:26Z|00036|binding|INFO|Setting lport 9f21c713-156d-4cef-99ef-70022fb8e58b up in Southbound
Jan 05 21:06:26 compute-0 nova_compute[186018]: 2026-01-05 21:06:26.791 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:26 compute-0 systemd[1]: Started Virtual Machine qemu-1-instance-00000001.
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.438 107689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.439 107689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmp9aadnj6k/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.277 240489 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.286 240489 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.290 240489 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/CAP_DAC_OVERRIDE|CAP_DAC_READ_SEARCH|CAP_NET_ADMIN|CAP_SYS_ADMIN|CAP_SYS_PTRACE/none
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.291 240489 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240489
Jan 05 21:06:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:27.444 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[29af79cd-b44b-4fc1-a48a-1c79a13a83e9]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.587 186022 DEBUG nova.compute.manager [req-1457c3e5-aee3-4fe7-90d0-a12dbfc7f61a req-50877391-b89a-4544-9ed3-fcc540176fcf 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.588 186022 DEBUG oslo_concurrency.lockutils [req-1457c3e5-aee3-4fe7-90d0-a12dbfc7f61a req-50877391-b89a-4544-9ed3-fcc540176fcf 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.588 186022 DEBUG oslo_concurrency.lockutils [req-1457c3e5-aee3-4fe7-90d0-a12dbfc7f61a req-50877391-b89a-4544-9ed3-fcc540176fcf 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.588 186022 DEBUG oslo_concurrency.lockutils [req-1457c3e5-aee3-4fe7-90d0-a12dbfc7f61a req-50877391-b89a-4544-9ed3-fcc540176fcf 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.588 186022 DEBUG nova.compute.manager [req-1457c3e5-aee3-4fe7-90d0-a12dbfc7f61a req-50877391-b89a-4544-9ed3-fcc540176fcf 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Processing event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.706 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647187.704657, f64de408-e6d1-4f7f-9f94-e20a4c83a87a => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.720 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] VM Started (Lifecycle Event)
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.726 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.734 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.742 186022 INFO nova.virt.libvirt.driver [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Instance spawned successfully.
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.743 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.758 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.773 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.780 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.781 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.782 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.784 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.785 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.786 186022 DEBUG nova.virt.libvirt.driver [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.795 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.796 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647187.7048926, f64de408-e6d1-4f7f-9f94-e20a4c83a87a => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.796 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] VM Paused (Lifecycle Event)
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.816 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.826 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647187.7319584, f64de408-e6d1-4f7f-9f94-e20a4c83a87a => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.827 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] VM Resumed (Lifecycle Event)
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.870 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.878 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.919 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.931 186022 INFO nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Took 9.38 seconds to spawn the instance on the hypervisor.
Jan 05 21:06:27 compute-0 nova_compute[186018]: 2026-01-05 21:06:27.933 186022 DEBUG nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.006 240489 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.008 240489 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.008 240489 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:28 compute-0 nova_compute[186018]: 2026-01-05 21:06:28.009 186022 INFO nova.compute.manager [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Took 10.02 seconds to build instance.
Jan 05 21:06:28 compute-0 nova_compute[186018]: 2026-01-05 21:06:28.044 186022 DEBUG oslo_concurrency.lockutils [None req-f8b47b94-7b10-42e1-a989-088dad53a06b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.188s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.638 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0625c0f6-c3a4-416e-9275-2901116ac7ac]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.641 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapb871481f-01 in ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.644 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapb871481f-00 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.645 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3b5e90a5-a346-474c-8354-21019f1d7766]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.650 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f5e0131d-7dcb-42cc-ad72-dc8f9bb27f35]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.698 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[015111d5-d56c-4450-a4bf-16404f8ad31e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.726 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e5be4ce1-e4e0-454b-907b-b752a76bdab5]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:28 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:28.730 107689 INFO oslo.privsep.daemon [-] Running privsep helper: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'privsep-helper', '--config-file', '/etc/neutron/neutron.conf', '--config-dir', '/etc/neutron.conf.d', '--privsep_context', 'neutron.privileged.link_cmd', '--privsep_sock_path', '/tmp/tmpco1yfb09/privsep.sock']
Jan 05 21:06:29 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:06:29 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.549 107689 INFO oslo.privsep.daemon [-] Spawned new privsep daemon via rootwrap
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.550 107689 DEBUG oslo.privsep.daemon [-] Accepted privsep connection to /tmp/tmpco1yfb09/privsep.sock __init__ /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:362
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.402 240510 INFO oslo.privsep.daemon [-] privsep daemon starting
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.410 240510 INFO oslo.privsep.daemon [-] privsep process running with uid/gid: 0/0
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.414 240510 INFO oslo.privsep.daemon [-] privsep process running with capabilities (eff/prm/inh): CAP_NET_ADMIN|CAP_SYS_ADMIN/CAP_NET_ADMIN|CAP_SYS_ADMIN/none
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.414 240510 INFO oslo.privsep.daemon [-] privsep daemon running as pid 240510
Jan 05 21:06:29 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:29.554 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4165f344-a785-4b34-8f3c-e821776a4505]: (2,) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:29 compute-0 podman[240511]: 2026-01-05 21:06:29.580046393 +0000 UTC m=+0.143097693 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.663 186022 DEBUG nova.compute.manager [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.664 186022 DEBUG oslo_concurrency.lockutils [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.664 186022 DEBUG oslo_concurrency.lockutils [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.665 186022 DEBUG oslo_concurrency.lockutils [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.665 186022 DEBUG nova.compute.manager [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] No waiting events found dispatching network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:06:29 compute-0 nova_compute[186018]: 2026-01-05 21:06:29.666 186022 WARNING nova.compute.manager [req-ff289b45-7db0-463b-8bbe-94be8a370d72 req-298d24f4-84fc-45b8-b97b-b22e01834066 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received unexpected event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b for instance with vm_state active and task_state None.
Jan 05 21:06:29 compute-0 podman[202426]: time="2026-01-05T21:06:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:06:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:06:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:06:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:06:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3863 "" "Go-http-client/1.1"
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.082 240510 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "context-manager" by "neutron_lib.db.api._create_context_manager" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.082 240510 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" acquired by "neutron_lib.db.api._create_context_manager" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.082 240510 DEBUG oslo_concurrency.lockutils [-] Lock "context-manager" "released" by "neutron_lib.db.api._create_context_manager" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:30 compute-0 nova_compute[186018]: 2026-01-05 21:06:30.349 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.674 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[80139e6c-4db1-403b-b1b7-e6e797fcaf5f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.710 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[118d7eee-9b1e-4e96-a3cc-50cc07402da5]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 NetworkManager[56598]: <info>  [1767647190.7153] manager: (tapb871481f-00): new Veth device (/org/freedesktop/NetworkManager/Devices/21)
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.766 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[5fc643ba-fb85-4ccd-bd51-b81e5760da2f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.775 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[e19df490-91df-43d4-a8a1-0964bc801ba0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 systemd-udevd[240567]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:06:30 compute-0 NetworkManager[56598]: <info>  [1767647190.8127] device (tapb871481f-00): carrier: link connected
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.830 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[d546d488-d998-4faf-8c27-e0ac70668469]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.861 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[8384769e-8e03-4134-b130-061948e4f9c5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 17968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 240585, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.885 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[aa636390-ac77-429f-9d19-32e489b7e0f0]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe97:f0d4'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393151, 'tstamp': 393151}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 240586, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.907 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f4fe6af4-fb2f-4c3c-b73f-47990f763907]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 2, 'tx_packets': 1, 'rx_bytes': 176, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 17968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 2, 'inoctets': 148, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 2, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 148, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 2, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 240587, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:30.960 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[54f01fbb-d0ad-492a-9412-6d8a6315680c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.057 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[dc4cfab5-dc8f-4c32-b416-18a8ad319809]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.060 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.062 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.063 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:31 compute-0 kernel: tapb871481f-00: entered promiscuous mode
Jan 05 21:06:31 compute-0 nova_compute[186018]: 2026-01-05 21:06:31.067 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.072 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:06:31 compute-0 nova_compute[186018]: 2026-01-05 21:06:31.075 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:31 compute-0 ovn_controller[98229]: 2026-01-05T21:06:31Z|00037|binding|INFO|Releasing lport a16ac18f-2e71-4427-b368-840ecfba3d33 from this chassis (sb_readonly=0)
Jan 05 21:06:31 compute-0 NetworkManager[56598]: <info>  [1767647191.0788] manager: (tapb871481f-00): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/22)
Jan 05 21:06:31 compute-0 nova_compute[186018]: 2026-01-05 21:06:31.103 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.105 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/b871481f-0445-42f2-8b6a-2e8572ae5b49.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/b871481f-0445-42f2-8b6a-2e8572ae5b49.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.107 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[55701899-47f4-45bf-828e-077000170cbe]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.112 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/b871481f-0445-42f2-8b6a-2e8572ae5b49.pid.haproxy
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:06:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:31.114 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'env', 'PROCESS_TAG=haproxy-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/b871481f-0445-42f2-8b6a-2e8572ae5b49.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:06:31 compute-0 openstack_network_exporter[205720]: ERROR   21:06:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:06:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:06:31 compute-0 openstack_network_exporter[205720]: ERROR   21:06:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:06:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:06:31 compute-0 podman[240618]: 2026-01-05 21:06:31.620900969 +0000 UTC m=+0.087591076 container create 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 21:06:31 compute-0 podman[240618]: 2026-01-05 21:06:31.578142989 +0000 UTC m=+0.044833156 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:06:31 compute-0 systemd[1]: Started libpod-conmon-233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01.scope.
Jan 05 21:06:31 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:06:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/cda2b6ea0f460aebf5e928b181d66261e487beffffac1eb57115d75f78f4611c/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:06:31 compute-0 nova_compute[186018]: 2026-01-05 21:06:31.780 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:31 compute-0 podman[240618]: 2026-01-05 21:06:31.786911996 +0000 UTC m=+0.253602363 container init 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:06:31 compute-0 podman[240618]: 2026-01-05 21:06:31.802374695 +0000 UTC m=+0.269064842 container start 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0)
Jan 05 21:06:31 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [NOTICE]   (240637) : New worker (240639) forked
Jan 05 21:06:31 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [NOTICE]   (240637) : Loading success.
Jan 05 21:06:33 compute-0 podman[240648]: 2026-01-05 21:06:33.778924621 +0000 UTC m=+0.122461727 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:06:33 compute-0 podman[240649]: 2026-01-05 21:06:33.830278449 +0000 UTC m=+0.170506598 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:06:35 compute-0 nova_compute[186018]: 2026-01-05 21:06:35.354 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:36 compute-0 nova_compute[186018]: 2026-01-05 21:06:36.784 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:40 compute-0 nova_compute[186018]: 2026-01-05 21:06:40.358 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:41 compute-0 nova_compute[186018]: 2026-01-05 21:06:41.787 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:42 compute-0 podman[240689]: 2026-01-05 21:06:42.76204102 +0000 UTC m=+0.103334952 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:06:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:42.836 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:06:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:42.837 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:06:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:06:42.837 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:06:44 compute-0 ovn_controller[98229]: 2026-01-05T21:06:44Z|00038|binding|INFO|Releasing lport a16ac18f-2e71-4427-b368-840ecfba3d33 from this chassis (sb_readonly=0)
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.084 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1031] manager: (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/23)
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1039] device (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <warn>  [1767647204.1045] device (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1054] manager: (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int): new Open vSwitch Interface device (/org/freedesktop/NetworkManager/Devices/24)
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1067] device (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int)[Open vSwitch Interface]: state change: unmanaged -> unavailable (reason 'managed', managed-type: 'external')
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <warn>  [1767647204.1069] device (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int)[Open vSwitch Interface]: error setting IPv4 forwarding to '1': No such file or directory
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1085] manager: (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/25)
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1103] manager: (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/26)
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1113] device (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 21:06:44 compute-0 NetworkManager[56598]: <info>  [1767647204.1117] device (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int)[Open vSwitch Interface]: state change: unavailable -> disconnected (reason 'none', managed-type: 'full')
Jan 05 21:06:44 compute-0 ovn_controller[98229]: 2026-01-05T21:06:44Z|00039|binding|INFO|Releasing lport a16ac18f-2e71-4427-b368-840ecfba3d33 from this chassis (sb_readonly=0)
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.150 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.158 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.447 186022 DEBUG nova.compute.manager [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-changed-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.447 186022 DEBUG nova.compute.manager [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Refreshing instance network info cache due to event network-changed-9f21c713-156d-4cef-99ef-70022fb8e58b. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.448 186022 DEBUG oslo_concurrency.lockutils [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.448 186022 DEBUG oslo_concurrency.lockutils [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:06:44 compute-0 nova_compute[186018]: 2026-01-05 21:06:44.449 186022 DEBUG nova.network.neutron [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Refreshing network info cache for port 9f21c713-156d-4cef-99ef-70022fb8e58b _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:06:45 compute-0 nova_compute[186018]: 2026-01-05 21:06:45.361 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:45 compute-0 podman[240711]: 2026-01-05 21:06:45.797945794 +0000 UTC m=+0.137061093 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 21:06:45 compute-0 nova_compute[186018]: 2026-01-05 21:06:45.994 186022 DEBUG nova.network.neutron [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated VIF entry in instance network info cache for port 9f21c713-156d-4cef-99ef-70022fb8e58b. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:06:45 compute-0 nova_compute[186018]: 2026-01-05 21:06:45.995 186022 DEBUG nova.network.neutron [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:06:46 compute-0 nova_compute[186018]: 2026-01-05 21:06:46.035 186022 DEBUG oslo_concurrency.lockutils [req-8b17ce70-2b5a-4f61-a665-94a3d6abb9db req-b19fe0fa-2552-4f2f-a53a-cc0fa2494859 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:06:46 compute-0 nova_compute[186018]: 2026-01-05 21:06:46.790 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:48 compute-0 podman[240730]: 2026-01-05 21:06:48.809118974 +0000 UTC m=+0.149089021 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible, name=ubi9, architecture=x86_64, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler)
Jan 05 21:06:50 compute-0 nova_compute[186018]: 2026-01-05 21:06:50.364 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:50 compute-0 podman[240751]: 2026-01-05 21:06:50.833512776 +0000 UTC m=+0.173464816 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, io.buildah.version=1.41.4)
Jan 05 21:06:51 compute-0 nova_compute[186018]: 2026-01-05 21:06:51.794 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:55 compute-0 nova_compute[186018]: 2026-01-05 21:06:55.368 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:55 compute-0 nova_compute[186018]: 2026-01-05 21:06:55.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:55 compute-0 nova_compute[186018]: 2026-01-05 21:06:55.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:06:56 compute-0 nova_compute[186018]: 2026-01-05 21:06:56.796 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:06:57 compute-0 nova_compute[186018]: 2026-01-05 21:06:57.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:57 compute-0 podman[240769]: 2026-01-05 21:06:57.762644791 +0000 UTC m=+0.115701889 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, vcs-type=git, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, version=9.6, config_id=openstack_network_exporter, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc.)
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:06:59 compute-0 podman[202426]: time="2026-01-05T21:06:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:06:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:06:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:06:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:06:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Jan 05 21:06:59 compute-0 podman[240789]: 2026-01-05 21:06:59.782710328 +0000 UTC m=+0.135111452 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible)
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.811 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.812 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.812 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:06:59 compute-0 nova_compute[186018]: 2026-01-05 21:06:59.812 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:07:00 compute-0 nova_compute[186018]: 2026-01-05 21:07:00.372 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:00 compute-0 ovn_controller[98229]: 2026-01-05T21:07:00Z|00004|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:98:b1:c7 192.168.0.17
Jan 05 21:07:00 compute-0 ovn_controller[98229]: 2026-01-05T21:07:00Z|00005|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:98:b1:c7 192.168.0.17
Jan 05 21:07:01 compute-0 nova_compute[186018]: 2026-01-05 21:07:01.151 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:07:01 compute-0 nova_compute[186018]: 2026-01-05 21:07:01.177 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:07:01 compute-0 nova_compute[186018]: 2026-01-05 21:07:01.177 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:07:01 compute-0 openstack_network_exporter[205720]: ERROR   21:07:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:07:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:07:01 compute-0 openstack_network_exporter[205720]: ERROR   21:07:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:07:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:07:01 compute-0 nova_compute[186018]: 2026-01-05 21:07:01.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:01 compute-0 nova_compute[186018]: 2026-01-05 21:07:01.799 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.514 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.515 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.516 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.517 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.672 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.730 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.731 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.824 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.825 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.906 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.909 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:02 compute-0 nova_compute[186018]: 2026-01-05 21:07:02.985 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.380 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.381 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5230MB free_disk=72.42489242553711GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.381 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.382 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.491 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.491 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.492 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.562 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.609 186022 ERROR nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [req-76a4c886-7b87-4174-a8a4-9304ea325df1] Failed to update inventory to [{'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}}] for resource provider with UUID 98d67ab0-e613-4c26-9eaa-22cf91b060a7.  Got 409: {"errors": [{"status": 409, "title": "Conflict", "detail": "There was a conflict when trying to complete your request.\n\n resource provider generation conflict  ", "code": "placement.concurrent_update", "request_id": "req-76a4c886-7b87-4174-a8a4-9304ea325df1"}]}
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.642 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.662 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.663 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 0, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.683 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.706 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.765 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.829 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updated inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with generation 3 in Placement from set_inventory_for_provider using data: {'MEMORY_MB': {'total': 7679, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0, 'reserved': 512}, 'VCPU': {'total': 8, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0, 'reserved': 0}, 'DISK_GB': {'total': 79, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9, 'reserved': 1}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:957
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.831 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 generation from 3 to 4 during operation: update_inventory _update_generation /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:164
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.831 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.858 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:07:03 compute-0 nova_compute[186018]: 2026-01-05 21:07:03.859 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.477s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:04 compute-0 podman[240840]: 2026-01-05 21:07:04.734850853 +0000 UTC m=+0.082021119 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:07:04 compute-0 podman[240841]: 2026-01-05 21:07:04.787024082 +0000 UTC m=+0.113301576 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:07:05 compute-0 nova_compute[186018]: 2026-01-05 21:07:05.377 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:05 compute-0 nova_compute[186018]: 2026-01-05 21:07:05.861 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:05 compute-0 nova_compute[186018]: 2026-01-05 21:07:05.862 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:06 compute-0 nova_compute[186018]: 2026-01-05 21:07:06.803 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.778 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.779 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.793 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c440f80>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:07:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:08.135 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/f64de408-e6d1-4f7f-9f94-e20a4c83a87a -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.108 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1849 Content-Type: application/json Date: Mon, 05 Jan 2026 21:07:08 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-c43f5502-9d00-4560-b664-b486ebcdc71f x-openstack-request-id: req-c43f5502-9d00-4560-b664-b486ebcdc71f _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.109 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "f64de408-e6d1-4f7f-9f94-e20a4c83a87a", "name": "test_0", "status": "ACTIVE", "tenant_id": "704814115a61471f9b45484171f67b5f", "user_id": "41f377b42540490198f271301cf5fe90", "metadata": {}, "hostId": "cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424", "image": {"id": "31cf9c34-2e56-49e9-bb98-955ac3cf9185", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/31cf9c34-2e56-49e9-bb98-955ac3cf9185"}]}, "flavor": {"id": "d9d5992a-1c00-4233-a43d-71321ed82446", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d9d5992a-1c00-4233-a43d-71321ed82446"}]}, "created": "2026-01-05T21:06:15Z", "updated": "2026-01-05T21:06:27Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.17", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:98:b1:c7"}, {"version": 4, "addr": "192.168.122.227", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:98:b1:c7"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/f64de408-e6d1-4f7f-9f94-e20a4c83a87a"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/f64de408-e6d1-4f7f-9f94-e20a4c83a87a"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:06:27.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000001", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.109 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/f64de408-e6d1-4f7f-9f94-e20a4c83a87a used request id req-c43f5502-9d00-4560-b664-b486ebcdc71f request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.112 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.112 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.116 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:07:09.114299) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.118 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.118 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.119 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.119 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.120 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:07:09.119457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.127 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for f64de408-e6d1-4f7f-9f94-e20a4c83a87a / tap9f21c713-15 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.127 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 15 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.128 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.128 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.129 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.129 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.129 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.130 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.130 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.130 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:07:09.130126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.131 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.131 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.131 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.132 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.132 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.133 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:07:09.132775) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.133 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.134 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.134 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.134 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.135 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.135 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.135 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:07:09.135476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.136 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.136 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.136 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.137 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.137 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.137 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.138 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 1582 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.138 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:07:09.137904) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.138 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.139 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.139 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.139 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.140 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.140 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.140 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.141 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:07:09.140506) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.141 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.141 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.142 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.143 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.143 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.143 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:07:09.143087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.144 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.146 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:07:09.146471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.148 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.149 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.149 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.150 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.151 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.151 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:07:09.149031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:07:09.151653) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.199 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.200 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.200 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.202 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.203 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.203 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.203 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:07:09.203043) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.204 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.204 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.204 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.205 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.205 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:07:09.205447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.206 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: test_0>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: test_0>]
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.206 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.207 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.208 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.208 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:07:09.207879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.208 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.209 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.209 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.209 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.210 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.210 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.210 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:07:09.210426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.258 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 49.5390625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.259 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.259 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.260 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.260 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.260 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.261 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.261 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.262 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.262 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:07:09.261057) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.262 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.263 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.263 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.263 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.264 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.264 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.264 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:07:09.264086) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.264 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.265 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.266 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.266 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.266 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.267 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.267 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.268 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.268 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:07:09.267970) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.341 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.342 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.343 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.344 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.344 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.345 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.346 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.346 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.347 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.347 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 1884 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.347 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:07:09.347039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.349 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.349 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.350 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.350 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.351 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.351 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.352 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:07:09.351661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.352 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.353 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.354 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.355 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.355 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.356 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.357 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.357 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.358 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.358 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:07:09.358059) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.358 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.359 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.359 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.360 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.361 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41697280 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.362 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:07:09.361329) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.363 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 32010000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.364 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1379105423 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 221 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.365 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:07:09.362658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.366 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:07:09.363889) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:07:09.364650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:07:09.365968) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.367 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.368 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.369 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.370 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:07:09.371 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:07:10 compute-0 nova_compute[186018]: 2026-01-05 21:07:10.382 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:11 compute-0 nova_compute[186018]: 2026-01-05 21:07:11.807 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:13 compute-0 podman[240885]: 2026-01-05 21:07:13.759445407 +0000 UTC m=+0.103034654 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:07:14 compute-0 ovn_controller[98229]: 2026-01-05T21:07:14Z|00040|memory_trim|INFO|Detected inactivity (last active 30005 ms ago): trimming memory
Jan 05 21:07:15 compute-0 nova_compute[186018]: 2026-01-05 21:07:15.386 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:16 compute-0 podman[240909]: 2026-01-05 21:07:16.806890681 +0000 UTC m=+0.142603940 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 05 21:07:16 compute-0 nova_compute[186018]: 2026-01-05 21:07:16.810 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:19 compute-0 podman[240930]: 2026-01-05 21:07:19.765808346 +0000 UTC m=+0.116017046 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, io.buildah.version=1.29.0, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, version=9.4)
Jan 05 21:07:20 compute-0 nova_compute[186018]: 2026-01-05 21:07:20.391 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:21 compute-0 podman[240949]: 2026-01-05 21:07:21.788742038 +0000 UTC m=+0.125800126 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4)
Jan 05 21:07:21 compute-0 nova_compute[186018]: 2026-01-05 21:07:21.816 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:25 compute-0 nova_compute[186018]: 2026-01-05 21:07:25.397 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:26 compute-0 nova_compute[186018]: 2026-01-05 21:07:26.821 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:28 compute-0 podman[240968]: 2026-01-05 21:07:28.808851569 +0000 UTC m=+0.123725241 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, container_name=openstack_network_exporter, name=ubi9-minimal, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible)
Jan 05 21:07:29 compute-0 podman[202426]: time="2026-01-05T21:07:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:07:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:07:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:07:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:07:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Jan 05 21:07:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:30.385 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=4, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=3) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:07:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:30.386 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:07:30 compute-0 nova_compute[186018]: 2026-01-05 21:07:30.386 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:30 compute-0 nova_compute[186018]: 2026-01-05 21:07:30.400 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:30 compute-0 podman[240988]: 2026-01-05 21:07:30.781613503 +0000 UTC m=+0.135443421 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, config_id=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:07:31 compute-0 openstack_network_exporter[205720]: ERROR   21:07:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:07:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:07:31 compute-0 openstack_network_exporter[205720]: ERROR   21:07:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:07:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:07:31 compute-0 nova_compute[186018]: 2026-01-05 21:07:31.825 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.842 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.844 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.860 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.949 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.950 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.962 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:07:34 compute-0 nova_compute[186018]: 2026-01-05 21:07:34.963 186022 INFO nova.compute.claims [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.104 186022 DEBUG nova.compute.provider_tree [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.119 186022 DEBUG nova.scheduler.client.report [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.142 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.192s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.143 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.219 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.221 186022 DEBUG nova.network.neutron [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.246 186022 INFO nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.296 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.387 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:07:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:35.389 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '4'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.390 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.391 186022 INFO nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Creating image(s)
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.392 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.396 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.398 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.427 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.430 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.530 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.531 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.532 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.542 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.635 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.637 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.708 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk 1073741824" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.710 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.710 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:35 compute-0 podman[241018]: 2026-01-05 21:07:35.74744636 +0000 UTC m=+0.085709377 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:07:35 compute-0 podman[241015]: 2026-01-05 21:07:35.782371883 +0000 UTC m=+0.122847868 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.783 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.784 186022 DEBUG nova.virt.disk.api [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking if we can resize image /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.785 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.866 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.867 186022 DEBUG nova.virt.disk.api [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Cannot resize image /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:07:35 compute-0 nova_compute[186018]: 2026-01-05 21:07:35.867 186022 DEBUG nova.objects.instance [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'migration_context' on Instance uuid d0894ce8-3815-41f8-a495-2329081a9ed2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.106 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.107 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.111 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.141 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.221 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.223 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.225 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.251 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.329 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.331 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.405 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 1073741824" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.407 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.407 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.499 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.500 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.501 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Ensure instance console log exists: /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.502 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.502 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.503 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:36 compute-0 nova_compute[186018]: 2026-01-05 21:07:36.829 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:39 compute-0 nova_compute[186018]: 2026-01-05 21:07:39.889 186022 DEBUG nova.network.neutron [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Successfully updated port: f3274143-07c8-4956-b27c-98507a2443b2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:07:39 compute-0 nova_compute[186018]: 2026-01-05 21:07:39.903 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:07:39 compute-0 nova_compute[186018]: 2026-01-05 21:07:39.904 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:07:39 compute-0 nova_compute[186018]: 2026-01-05 21:07:39.904 186022 DEBUG nova.network.neutron [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:07:40 compute-0 nova_compute[186018]: 2026-01-05 21:07:40.001 186022 DEBUG nova.compute.manager [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-changed-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:07:40 compute-0 nova_compute[186018]: 2026-01-05 21:07:40.001 186022 DEBUG nova.compute.manager [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Refreshing instance network info cache due to event network-changed-f3274143-07c8-4956-b27c-98507a2443b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:07:40 compute-0 nova_compute[186018]: 2026-01-05 21:07:40.001 186022 DEBUG oslo_concurrency.lockutils [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:07:40 compute-0 nova_compute[186018]: 2026-01-05 21:07:40.051 186022 DEBUG nova.network.neutron [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:07:40 compute-0 nova_compute[186018]: 2026-01-05 21:07:40.430 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.351 186022 DEBUG nova.network.neutron [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.833 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.895 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.896 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Instance network_info: |[{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.898 186022 DEBUG oslo_concurrency.lockutils [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.898 186022 DEBUG nova.network.neutron [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Refreshing network info cache for port f3274143-07c8-4956-b27c-98507a2443b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.905 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Start _get_guest_xml network_info=[{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}], 'ephemerals': [{'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1, 'encrypted': False, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.918 186022 WARNING nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.929 186022 DEBUG nova.virt.libvirt.host [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.930 186022 DEBUG nova.virt.libvirt.host [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.942 186022 DEBUG nova.virt.libvirt.host [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.944 186022 DEBUG nova.virt.libvirt.host [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.945 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.945 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:05:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d9d5992a-1c00-4233-a43d-71321ed82446',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.946 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.948 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.948 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.949 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.949 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.950 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.951 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.951 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.952 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.953 186022 DEBUG nova.virt.hardware [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.959 186022 DEBUG nova.virt.libvirt.vif [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',id=2,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-aoba20n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:07:35Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODUxNDA1MjQ5MjY5MDg5MjUzNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 21:07:41 compute-0 nova_compute[186018]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODUxNDA1MjQ5MjY5MDg5MjUzNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=d0894ce8-3815-41f8-a495-2329081a9ed2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.960 186022 DEBUG nova.network.os_vif_util [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.963 186022 DEBUG nova.network.os_vif_util [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.965 186022 DEBUG nova.objects.instance [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'pci_devices' on Instance uuid d0894ce8-3815-41f8-a495-2329081a9ed2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.990 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <uuid>d0894ce8-3815-41f8-a495-2329081a9ed2</uuid>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <name>instance-00000002</name>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <memory>524288</memory>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:name>vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc</nova:name>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:07:41</nova:creationTime>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:flavor name="m1.small">
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:memory>512</nova:memory>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:user uuid="41f377b42540490198f271301cf5fe90">admin</nova:user>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:project uuid="704814115a61471f9b45484171f67b5f">admin</nova:project>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="31cf9c34-2e56-49e9-bb98-955ac3cf9185"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         <nova:port uuid="f3274143-07c8-4956-b27c-98507a2443b2">
Jan 05 21:07:41 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="192.168.0.216" ipVersion="4"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <system>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="serial">d0894ce8-3815-41f8-a495-2329081a9ed2</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="uuid">d0894ce8-3815-41f8-a495-2329081a9ed2</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </system>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <os>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </os>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <features>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </features>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <target dev="vdb" bus="virtio"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.config"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:13:ee:71"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <target dev="tapf3274143-07"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/console.log" append="off"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <video>
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </video>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:07:41 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:07:41 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:07:41 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:07:41 compute-0 nova_compute[186018]: </domain>
Jan 05 21:07:41 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.991 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Preparing to wait for external event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.991 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.991 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.991 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.992 186022 DEBUG nova.virt.libvirt.vif [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',id=2,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-aoba20n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:07:35Z,user_data='Content-Type: multipart/mixed; boundary="===============8514052492690892535=="
MIME-Version: 1.0

--===============8514052492690892535==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============8514052492690892535==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============8514052492690892535==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============8514052492690892535==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============8514052492690892535==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============8514052492690892535==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============8514052492690892535==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============8514052492690892535==--
',user_id='41f377b42540490198f271301cf5fe90',uuid=d0894ce8-3815-41f8-a495-2329081a9ed2,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.992 186022 DEBUG nova.network.os_vif_util [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.993 186022 DEBUG nova.network.os_vif_util [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.993 186022 DEBUG os_vif [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.994 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.994 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.995 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.998 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.998 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapf3274143-07, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:41 compute-0 nova_compute[186018]: 2026-01-05 21:07:41.999 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapf3274143-07, col_values=(('external_ids', {'iface-id': 'f3274143-07c8-4956-b27c-98507a2443b2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:13:ee:71', 'vm-uuid': 'd0894ce8-3815-41f8-a495-2329081a9ed2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:42 compute-0 NetworkManager[56598]: <info>  [1767647262.0031] manager: (tapf3274143-07): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/27)
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.004 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.008 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.009 186022 INFO os_vif [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07')
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.061 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.062 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.062 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.062 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No VIF found with MAC fa:16:3e:13:ee:71, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:07:42 compute-0 nova_compute[186018]: 2026-01-05 21:07:42.063 186022 INFO nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Using config drive
Jan 05 21:07:42 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:07:41.959 186022 DEBUG nova.virt.libvirt.vif [None req-99d17749-aa [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:07:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:42.839 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:42.841 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:42.842 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.031 186022 INFO nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Creating config drive at /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.config
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.037 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp824tjy88 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.182 186022 DEBUG oslo_concurrency.processutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp824tjy88" returned: 0 in 0.144s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:07:44 compute-0 kernel: tapf3274143-07: entered promiscuous mode
Jan 05 21:07:44 compute-0 NetworkManager[56598]: <info>  [1767647264.3053] manager: (tapf3274143-07): new Tun device (/org/freedesktop/NetworkManager/Devices/28)
Jan 05 21:07:44 compute-0 ovn_controller[98229]: 2026-01-05T21:07:44Z|00041|binding|INFO|Claiming lport f3274143-07c8-4956-b27c-98507a2443b2 for this chassis.
Jan 05 21:07:44 compute-0 ovn_controller[98229]: 2026-01-05T21:07:44Z|00042|binding|INFO|f3274143-07c8-4956-b27c-98507a2443b2: Claiming fa:16:3e:13:ee:71 192.168.0.216
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.316 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:44 compute-0 ovn_controller[98229]: 2026-01-05T21:07:44Z|00043|binding|INFO|Setting lport f3274143-07c8-4956-b27c-98507a2443b2 ovn-installed in OVS
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.364 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.370 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:44 compute-0 systemd-udevd[241123]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:07:44 compute-0 systemd-machined[157312]: New machine qemu-2-instance-00000002.
Jan 05 21:07:44 compute-0 NetworkManager[56598]: <info>  [1767647264.4110] device (tapf3274143-07): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:07:44 compute-0 NetworkManager[56598]: <info>  [1767647264.4118] device (tapf3274143-07): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:07:44 compute-0 podman[241091]: 2026-01-05 21:07:44.413069305 +0000 UTC m=+0.131497836 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:07:44 compute-0 systemd[1]: Started Virtual Machine qemu-2-instance-00000002.
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.987 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647264.9857075, d0894ce8-3815-41f8-a495-2329081a9ed2 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:07:44 compute-0 nova_compute[186018]: 2026-01-05 21:07:44.988 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] VM Started (Lifecycle Event)
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.194 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ee:71 192.168.0.216'], port_security=['fa:16:3e:13:ee:71 192.168.0.216'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-a47tklni2ayz-qhdfnok533vd-port-gbbzrm5s4gfv', 'neutron:cidrs': '192.168.0.216/24', 'neutron:device_id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-a47tklni2ayz-qhdfnok533vd-port-gbbzrm5s4gfv', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=f3274143-07c8-4956-b27c-98507a2443b2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.195 107689 INFO neutron.agent.ovn.metadata.agent [-] Port f3274143-07c8-4956-b27c-98507a2443b2 in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 bound to our chassis
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.197 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:07:45 compute-0 ovn_controller[98229]: 2026-01-05T21:07:45Z|00044|binding|INFO|Setting lport f3274143-07c8-4956-b27c-98507a2443b2 up in Southbound
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.229 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[03ba1bec-bbaf-4d53-a49c-50dc9db49068]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.282 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[47fa8415-e45c-4e22-b9dc-7dad68a3695d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.289 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4969aaaf-af7a-4cb7-9d46-03bfbedce78e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.342 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[8507b29e-ad37-4c92-a7e0-38da727d5cc9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.384 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c0ca1b11-7b8a-4e4c-af29-80913e534d4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 5, 'rx_bytes': 532, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 17968, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 241146, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.403 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.411 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647264.9864535, d0894ce8-3815-41f8-a495-2329081a9ed2 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.412 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] VM Paused (Lifecycle Event)
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.414 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[282ad6bb-508a-4e37-b27e-1f168bf00ee0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241147, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 241147, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.417 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.419 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.421 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.422 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.422 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:07:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:07:45.423 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.430 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.436 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:07:45 compute-0 nova_compute[186018]: 2026-01-05 21:07:45.466 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.202 186022 DEBUG nova.compute.manager [req-c94cb992-53eb-47f8-8e58-c39f8a15bbe0 req-9335d4f7-b668-42aa-9087-1a389d5aa025 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.203 186022 DEBUG oslo_concurrency.lockutils [req-c94cb992-53eb-47f8-8e58-c39f8a15bbe0 req-9335d4f7-b668-42aa-9087-1a389d5aa025 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.204 186022 DEBUG oslo_concurrency.lockutils [req-c94cb992-53eb-47f8-8e58-c39f8a15bbe0 req-9335d4f7-b668-42aa-9087-1a389d5aa025 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.205 186022 DEBUG oslo_concurrency.lockutils [req-c94cb992-53eb-47f8-8e58-c39f8a15bbe0 req-9335d4f7-b668-42aa-9087-1a389d5aa025 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.206 186022 DEBUG nova.compute.manager [req-c94cb992-53eb-47f8-8e58-c39f8a15bbe0 req-9335d4f7-b668-42aa-9087-1a389d5aa025 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Processing event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.207 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.224 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.226 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647266.223045, d0894ce8-3815-41f8-a495-2329081a9ed2 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.226 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] VM Resumed (Lifecycle Event)
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.238 186022 INFO nova.virt.libvirt.driver [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Instance spawned successfully.
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.240 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.273 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.284 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.318 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.330 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.331 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.332 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.333 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.333 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.334 186022 DEBUG nova.virt.libvirt.driver [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.583 186022 INFO nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Took 11.19 seconds to spawn the instance on the hypervisor.
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.584 186022 DEBUG nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.686 186022 INFO nova.compute.manager [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Took 11.77 seconds to build instance.
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.720 186022 DEBUG oslo_concurrency.lockutils [None req-99d17749-aa26-4693-8f29-ccfb782ac90d 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 11.877s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.840 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.992 186022 DEBUG nova.network.neutron [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated VIF entry in instance network info cache for port f3274143-07c8-4956-b27c-98507a2443b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:07:46 compute-0 nova_compute[186018]: 2026-01-05 21:07:46.993 186022 DEBUG nova.network.neutron [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:07:47 compute-0 nova_compute[186018]: 2026-01-05 21:07:47.001 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:47 compute-0 nova_compute[186018]: 2026-01-05 21:07:47.008 186022 DEBUG oslo_concurrency.lockutils [req-8e7b7977-1cdb-41ed-b7c3-246915417823 req-293e8624-c06c-482e-bc48-30393157c57b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:07:47 compute-0 podman[241148]: 2026-01-05 21:07:47.76462005 +0000 UTC m=+0.113991514 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.319 186022 DEBUG nova.compute.manager [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.320 186022 DEBUG oslo_concurrency.lockutils [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.320 186022 DEBUG oslo_concurrency.lockutils [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.320 186022 DEBUG oslo_concurrency.lockutils [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.321 186022 DEBUG nova.compute.manager [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] No waiting events found dispatching network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:07:48 compute-0 nova_compute[186018]: 2026-01-05 21:07:48.321 186022 WARNING nova.compute.manager [req-c09222bb-2bea-43ff-b639-827d785ed1cf req-9f072042-04e0-47db-9fa9-bc9d3d0d2560 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received unexpected event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 for instance with vm_state active and task_state None.
Jan 05 21:07:50 compute-0 podman[241169]: 2026-01-05 21:07:50.775984435 +0000 UTC m=+0.124277745 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, container_name=kepler, io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.4, config_id=kepler)
Jan 05 21:07:51 compute-0 nova_compute[186018]: 2026-01-05 21:07:51.842 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:52 compute-0 nova_compute[186018]: 2026-01-05 21:07:52.005 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:52 compute-0 podman[241189]: 2026-01-05 21:07:52.787914127 +0000 UTC m=+0.126208677 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:07:56 compute-0 nova_compute[186018]: 2026-01-05 21:07:56.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:56 compute-0 nova_compute[186018]: 2026-01-05 21:07:56.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:07:56 compute-0 nova_compute[186018]: 2026-01-05 21:07:56.846 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:57 compute-0 nova_compute[186018]: 2026-01-05 21:07:57.008 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.465 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:07:59 compute-0 podman[202426]: time="2026-01-05T21:07:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:07:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:07:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:07:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:07:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4348 "" "Go-http-client/1.1"
Jan 05 21:07:59 compute-0 podman[241209]: 2026-01-05 21:07:59.815217766 +0000 UTC m=+0.171149734 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, architecture=x86_64, name=ubi9-minimal, io.openshift.expose-services=, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.831 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.832 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.832 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:07:59 compute-0 nova_compute[186018]: 2026-01-05 21:07:59.833 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:08:01 compute-0 nova_compute[186018]: 2026-01-05 21:08:01.317 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:08:01 compute-0 nova_compute[186018]: 2026-01-05 21:08:01.334 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:08:01 compute-0 nova_compute[186018]: 2026-01-05 21:08:01.334 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:08:01 compute-0 nova_compute[186018]: 2026-01-05 21:08:01.335 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:01 compute-0 openstack_network_exporter[205720]: ERROR   21:08:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:08:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:08:01 compute-0 openstack_network_exporter[205720]: ERROR   21:08:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:08:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:08:01 compute-0 podman[241228]: 2026-01-05 21:08:01.805310511 +0000 UTC m=+0.147625322 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 21:08:01 compute-0 nova_compute[186018]: 2026-01-05 21:08:01.848 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:02 compute-0 nova_compute[186018]: 2026-01-05 21:08:02.012 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:02 compute-0 nova_compute[186018]: 2026-01-05 21:08:02.328 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:02 compute-0 nova_compute[186018]: 2026-01-05 21:08:02.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.506 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.507 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.508 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.508 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.606 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.690 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.692 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.765 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.767 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.864 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.866 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.960 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:03 compute-0 nova_compute[186018]: 2026-01-05 21:08:03.972 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.060 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.061 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.158 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.159 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.252 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.254 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.317 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.973 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.977 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5131MB free_disk=72.42190933227539GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.978 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:08:04 compute-0 nova_compute[186018]: 2026-01-05 21:08:04.980 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.081 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.083 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.084 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.085 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.169 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.192 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.220 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:08:05 compute-0 nova_compute[186018]: 2026-01-05 21:08:05.221 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.242s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:08:06 compute-0 podman[241278]: 2026-01-05 21:08:06.794837105 +0000 UTC m=+0.125739444 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:08:06 compute-0 podman[241279]: 2026-01-05 21:08:06.840985735 +0000 UTC m=+0.161191171 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:08:06 compute-0 nova_compute[186018]: 2026-01-05 21:08:06.853 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:07 compute-0 nova_compute[186018]: 2026-01-05 21:08:07.016 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:07 compute-0 nova_compute[186018]: 2026-01-05 21:08:07.224 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:07 compute-0 nova_compute[186018]: 2026-01-05 21:08:07.224 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:08 compute-0 nova_compute[186018]: 2026-01-05 21:08:08.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:11 compute-0 nova_compute[186018]: 2026-01-05 21:08:11.856 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:12 compute-0 nova_compute[186018]: 2026-01-05 21:08:12.019 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:14 compute-0 podman[241315]: 2026-01-05 21:08:14.800860371 +0000 UTC m=+0.138431459 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:08:15 compute-0 ovn_controller[98229]: 2026-01-05T21:08:15Z|00045|memory_trim|INFO|Detected inactivity (last active 30001 ms ago): trimming memory
Jan 05 21:08:16 compute-0 nova_compute[186018]: 2026-01-05 21:08:16.860 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:17 compute-0 nova_compute[186018]: 2026-01-05 21:08:17.024 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:18 compute-0 podman[241337]: 2026-01-05 21:08:18.797376349 +0000 UTC m=+0.131274051 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:08:20 compute-0 ovn_controller[98229]: 2026-01-05T21:08:20Z|00006|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:13:ee:71 192.168.0.216
Jan 05 21:08:20 compute-0 ovn_controller[98229]: 2026-01-05T21:08:20Z|00007|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:13:ee:71 192.168.0.216
Jan 05 21:08:21 compute-0 podman[241369]: 2026-01-05 21:08:21.818117621 +0000 UTC m=+0.155329166 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release=1214.1726694543, vcs-type=git, container_name=kepler, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, io.openshift.tags=base rhel9, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, maintainer=Red Hat, Inc., name=ubi9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler)
Jan 05 21:08:21 compute-0 nova_compute[186018]: 2026-01-05 21:08:21.861 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:22 compute-0 nova_compute[186018]: 2026-01-05 21:08:22.028 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:23 compute-0 podman[241388]: 2026-01-05 21:08:23.80313577 +0000 UTC m=+0.136297925 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:08:26 compute-0 nova_compute[186018]: 2026-01-05 21:08:26.864 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:27 compute-0 nova_compute[186018]: 2026-01-05 21:08:27.030 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:29 compute-0 podman[202426]: time="2026-01-05T21:08:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:08:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:08:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:08:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:08:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Jan 05 21:08:30 compute-0 podman[241407]: 2026-01-05 21:08:30.789520687 +0000 UTC m=+0.127961736 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, distribution-scope=public)
Jan 05 21:08:31 compute-0 openstack_network_exporter[205720]: ERROR   21:08:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:08:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:08:31 compute-0 openstack_network_exporter[205720]: ERROR   21:08:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:08:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:08:31 compute-0 nova_compute[186018]: 2026-01-05 21:08:31.867 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:32 compute-0 nova_compute[186018]: 2026-01-05 21:08:32.033 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:32 compute-0 podman[241428]: 2026-01-05 21:08:32.785426233 +0000 UTC m=+0.136863790 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:08:36 compute-0 nova_compute[186018]: 2026-01-05 21:08:36.870 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:36 compute-0 podman[241454]: 2026-01-05 21:08:36.968628926 +0000 UTC m=+0.070063633 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:08:36 compute-0 podman[241455]: 2026-01-05 21:08:36.989951627 +0000 UTC m=+0.085314305 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:08:37 compute-0 nova_compute[186018]: 2026-01-05 21:08:37.036 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:41 compute-0 nova_compute[186018]: 2026-01-05 21:08:41.876 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:42 compute-0 nova_compute[186018]: 2026-01-05 21:08:42.038 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:42 compute-0 sshd-session[241496]: Invalid user  from 64.62.197.202 port 33651
Jan 05 21:08:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:08:42.841 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:08:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:08:42.842 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:08:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:08:42.843 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:08:45 compute-0 podman[241498]: 2026-01-05 21:08:45.78800753 +0000 UTC m=+0.126956070 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:08:45 compute-0 sshd-session[241496]: Connection closed by invalid user  64.62.197.202 port 33651 [preauth]
Jan 05 21:08:46 compute-0 nova_compute[186018]: 2026-01-05 21:08:46.878 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:47 compute-0 nova_compute[186018]: 2026-01-05 21:08:47.041 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:49 compute-0 podman[241523]: 2026-01-05 21:08:49.790315401 +0000 UTC m=+0.121624679 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 05 21:08:51 compute-0 nova_compute[186018]: 2026-01-05 21:08:51.880 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:52 compute-0 nova_compute[186018]: 2026-01-05 21:08:52.044 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:52 compute-0 podman[241543]: 2026-01-05 21:08:52.788899854 +0000 UTC m=+0.126698402 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 21:08:54 compute-0 podman[241562]: 2026-01-05 21:08:54.785061298 +0000 UTC m=+0.126624981 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2)
Jan 05 21:08:56 compute-0 nova_compute[186018]: 2026-01-05 21:08:56.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:56 compute-0 nova_compute[186018]: 2026-01-05 21:08:56.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:08:56 compute-0 nova_compute[186018]: 2026-01-05 21:08:56.884 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:57 compute-0 nova_compute[186018]: 2026-01-05 21:08:57.049 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:08:59 compute-0 nova_compute[186018]: 2026-01-05 21:08:59.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:08:59 compute-0 podman[202426]: time="2026-01-05T21:08:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:08:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:08:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:08:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:08:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Jan 05 21:09:01 compute-0 openstack_network_exporter[205720]: ERROR   21:09:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:09:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:09:01 compute-0 openstack_network_exporter[205720]: ERROR   21:09:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:09:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:09:01 compute-0 nova_compute[186018]: 2026-01-05 21:09:01.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:01 compute-0 nova_compute[186018]: 2026-01-05 21:09:01.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:01 compute-0 nova_compute[186018]: 2026-01-05 21:09:01.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:09:01 compute-0 podman[241582]: 2026-01-05 21:09:01.805043272 +0000 UTC m=+0.145627220 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., vcs-type=git, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, version=9.6)
Jan 05 21:09:01 compute-0 nova_compute[186018]: 2026-01-05 21:09:01.889 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:02 compute-0 nova_compute[186018]: 2026-01-05 21:09:02.054 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:02 compute-0 nova_compute[186018]: 2026-01-05 21:09:02.066 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:09:02 compute-0 nova_compute[186018]: 2026-01-05 21:09:02.066 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:09:02 compute-0 nova_compute[186018]: 2026-01-05 21:09:02.067 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:09:03 compute-0 nova_compute[186018]: 2026-01-05 21:09:03.502 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:09:03 compute-0 nova_compute[186018]: 2026-01-05 21:09:03.526 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:09:03 compute-0 nova_compute[186018]: 2026-01-05 21:09:03.527 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:09:03 compute-0 nova_compute[186018]: 2026-01-05 21:09:03.528 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:03 compute-0 podman[241600]: 2026-01-05 21:09:03.851707013 +0000 UTC m=+0.184308658 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.488 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.489 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.601 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.697 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.699 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.791 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.793 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.856 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.858 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.957 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:04 compute-0 nova_compute[186018]: 2026-01-05 21:09:04.966 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.066 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.067 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.165 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.166 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.240 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.244 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.306 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.702 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.703 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5047MB free_disk=72.4006118774414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.703 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.704 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.807 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.808 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.808 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.809 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.873 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.898 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.901 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:09:05 compute-0 nova_compute[186018]: 2026-01-05 21:09:05.901 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:09:06 compute-0 nova_compute[186018]: 2026-01-05 21:09:06.893 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:06 compute-0 nova_compute[186018]: 2026-01-05 21:09:06.901 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:07 compute-0 nova_compute[186018]: 2026-01-05 21:09:07.057 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:07 compute-0 nova_compute[186018]: 2026-01-05 21:09:07.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:07 compute-0 podman[241650]: 2026-01-05 21:09:07.775951909 +0000 UTC m=+0.106327737 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.779 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.779 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.780 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.780 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.783 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.797 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance d0894ce8-3815-41f8-a495-2329081a9ed2 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:09:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:07.799 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/d0894ce8-3815-41f8-a495-2329081a9ed2 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:09:07 compute-0 podman[241651]: 2026-01-05 21:09:07.810793156 +0000 UTC m=+0.133782449 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.010 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1960 Content-Type: application/json Date: Mon, 05 Jan 2026 21:09:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-e06d27d9-74d8-4486-8c4a-339fdf8f5a21 x-openstack-request-id: req-e06d27d9-74d8-4486-8c4a-339fdf8f5a21 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.011 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "d0894ce8-3815-41f8-a495-2329081a9ed2", "name": "vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc", "status": "ACTIVE", "tenant_id": "704814115a61471f9b45484171f67b5f", "user_id": "41f377b42540490198f271301cf5fe90", "metadata": {"metering.server_group": "a6371b97-6a0c-4b37-9443-eaf5410da4a4"}, "hostId": "cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424", "image": {"id": "31cf9c34-2e56-49e9-bb98-955ac3cf9185", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/31cf9c34-2e56-49e9-bb98-955ac3cf9185"}]}, "flavor": {"id": "d9d5992a-1c00-4233-a43d-71321ed82446", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d9d5992a-1c00-4233-a43d-71321ed82446"}]}, "created": "2026-01-05T21:07:33Z", "updated": "2026-01-05T21:07:46Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.216", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:13:ee:71"}, {"version": 4, "addr": "192.168.122.243", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:13:ee:71"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/d0894ce8-3815-41f8-a495-2329081a9ed2"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/d0894ce8-3815-41f8-a495-2329081a9ed2"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:07:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000002", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, 
"OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.011 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/d0894ce8-3815-41f8-a495-2329081a9ed2 used request id req-e06d27d9-74d8-4486-8c4a-339fdf8f5a21 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.012 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'name': 'vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.017 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.017 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.017 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.017 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.017 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.018 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.019 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.019 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.019 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.019 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.021 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:09:10.017746) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.023 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:09:10.019603) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.024 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for d0894ce8-3815-41f8-a495-2329081a9ed2 / tapf3274143-07 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.024 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.029 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.029 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.029 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.029 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.030 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.030 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.030 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.030 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.030 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.031 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets volume: 38 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.032 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.032 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.032 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.033 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.033 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.033 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.033 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.033 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.034 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes volume: 4578 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.035 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2202 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.035 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.035 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:09:10.030276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:09:10.031819) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:09:10.033389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:09:10.034593) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.036 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.037 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc>]
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:09:10.036198) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:09:10.037727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.038 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.039 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.039 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:09:10.038971) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.040 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.041 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.041 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.041 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.042 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.042 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.042 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:09:10.040662) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.042 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.042 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.043 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:09:10.042386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.077 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.077 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.078 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.114 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.115 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.115 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.116 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.116 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.117 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.117 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.117 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.117 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.118 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.118 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:09:10.117649) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.118 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.119 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.119 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.119 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.120 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.120 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.120 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.120 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:09:10.120497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.121 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc>]
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.122 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.122 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.122 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.122 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.123 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.123 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.124 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.125 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.125 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:09:10.122882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.125 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.126 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.126 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.126 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.127 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:09:10.126529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.162 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/memory.usage volume: 49.1328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.197 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.198 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.198 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.199 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.199 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.199 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.199 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.200 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.200 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.201 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.201 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.202 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.202 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.202 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:09:10.199797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.202 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.203 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.203 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.203 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.204 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:09:10.203035) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.204 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.205 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.205 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.206 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.207 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.209 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:09:10.207887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.299 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.300 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.301 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.429 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.430 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.430 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.433 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:09:10.432866) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.434 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.434 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.435 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.435 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.436 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 441838413 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.436 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 97302278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.437 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 82890817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.438 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:09:10.435820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.438 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.439 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.440 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.440 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.441 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.441 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.441 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.441 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.442 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.442 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:09:10.441770) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.443 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.443 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.444 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.444 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.445 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.446 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.446 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.446 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.447 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.447 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.447 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.447 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.448 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:09:10.447380) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.448 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.449 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.450 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.450 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.451 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.451 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.452 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.452 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.452 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.452 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.453 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.453 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:09:10.453008) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.453 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 41824256 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.454 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.454 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.455 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.455 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.456 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.457 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.457 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.457 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.458 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.458 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.458 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.458 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/cpu volume: 44560000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.459 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 34130000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.459 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:09:10.458477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.460 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.460 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.461 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.461 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.461 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.461 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.462 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 1648900766 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.462 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:09:10.461695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.462 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 11989637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.463 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.463 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.464 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.464 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.465 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.465 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.465 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.465 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.466 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.466 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.466 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.467 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:09:10.466386) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.467 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.467 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.468 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.468 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.469 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.470 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.470 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.471 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.472 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.473 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.474 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.475 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:10 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:09:10.476 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:09:11 compute-0 nova_compute[186018]: 2026-01-05 21:09:11.896 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:12 compute-0 nova_compute[186018]: 2026-01-05 21:09:12.060 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:16 compute-0 podman[241689]: 2026-01-05 21:09:16.805896531 +0000 UTC m=+0.142880928 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:09:16 compute-0 nova_compute[186018]: 2026-01-05 21:09:16.900 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:17 compute-0 nova_compute[186018]: 2026-01-05 21:09:17.063 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:20 compute-0 podman[241714]: 2026-01-05 21:09:20.821279094 +0000 UTC m=+0.161318163 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:09:21 compute-0 nova_compute[186018]: 2026-01-05 21:09:21.904 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:22 compute-0 nova_compute[186018]: 2026-01-05 21:09:22.067 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:23 compute-0 podman[241734]: 2026-01-05 21:09:23.790489536 +0000 UTC m=+0.121284241 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, distribution-scope=public, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, config_id=kepler, io.openshift.tags=base rhel9, container_name=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543)
Jan 05 21:09:25 compute-0 podman[241754]: 2026-01-05 21:09:25.781549934 +0000 UTC m=+0.135149245 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0)
Jan 05 21:09:26 compute-0 nova_compute[186018]: 2026-01-05 21:09:26.907 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:27 compute-0 nova_compute[186018]: 2026-01-05 21:09:27.070 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:29 compute-0 podman[202426]: time="2026-01-05T21:09:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:09:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:09:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:09:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:09:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4342 "" "Go-http-client/1.1"
Jan 05 21:09:31 compute-0 openstack_network_exporter[205720]: ERROR   21:09:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:09:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:09:31 compute-0 openstack_network_exporter[205720]: ERROR   21:09:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:09:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:09:31 compute-0 nova_compute[186018]: 2026-01-05 21:09:31.909 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:32 compute-0 nova_compute[186018]: 2026-01-05 21:09:32.073 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:32 compute-0 podman[241774]: 2026-01-05 21:09:32.799025433 +0000 UTC m=+0.128968622 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, name=ubi9-minimal, version=9.6, maintainer=Red Hat, Inc., release=1755695350, config_id=openstack_network_exporter, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git)
Jan 05 21:09:34 compute-0 podman[241796]: 2026-01-05 21:09:34.833716721 +0000 UTC m=+0.170061284 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 21:09:36 compute-0 nova_compute[186018]: 2026-01-05 21:09:36.914 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:37 compute-0 nova_compute[186018]: 2026-01-05 21:09:37.077 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:38 compute-0 podman[241830]: 2026-01-05 21:09:38.781995019 +0000 UTC m=+0.097953047 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:09:38 compute-0 podman[241829]: 2026-01-05 21:09:38.78279341 +0000 UTC m=+0.111927404 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:09:41 compute-0 nova_compute[186018]: 2026-01-05 21:09:41.921 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:42 compute-0 nova_compute[186018]: 2026-01-05 21:09:42.081 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:09:42.843 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:09:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:09:42.844 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:09:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:09:42.845 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:09:46 compute-0 nova_compute[186018]: 2026-01-05 21:09:46.925 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:47 compute-0 nova_compute[186018]: 2026-01-05 21:09:47.084 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:47 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:09:47 compute-0 podman[241869]: 2026-01-05 21:09:47.659679826 +0000 UTC m=+0.121358722 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:09:51 compute-0 podman[241893]: 2026-01-05 21:09:51.775758787 +0000 UTC m=+0.105692250 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:09:51 compute-0 nova_compute[186018]: 2026-01-05 21:09:51.929 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:52 compute-0 nova_compute[186018]: 2026-01-05 21:09:52.832 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:54 compute-0 podman[241913]: 2026-01-05 21:09:54.798682051 +0000 UTC m=+0.135175386 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, version=9.4, com.redhat.component=ubi9-container, vcs-type=git, vendor=Red Hat, Inc., name=ubi9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, architecture=x86_64, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, container_name=kepler, io.openshift.tags=base rhel9)
Jan 05 21:09:56 compute-0 podman[241934]: 2026-01-05 21:09:56.79848155 +0000 UTC m=+0.131529739 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:09:56 compute-0 nova_compute[186018]: 2026-01-05 21:09:56.931 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:57 compute-0 nova_compute[186018]: 2026-01-05 21:09:57.836 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:09:58 compute-0 nova_compute[186018]: 2026-01-05 21:09:58.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:09:58 compute-0 nova_compute[186018]: 2026-01-05 21:09:58.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:09:59 compute-0 podman[202426]: time="2026-01-05T21:09:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:09:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:09:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:09:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:09:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Jan 05 21:10:01 compute-0 openstack_network_exporter[205720]: ERROR   21:10:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:10:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:10:01 compute-0 openstack_network_exporter[205720]: ERROR   21:10:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:10:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:10:01 compute-0 nova_compute[186018]: 2026-01-05 21:10:01.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:01 compute-0 nova_compute[186018]: 2026-01-05 21:10:01.933 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:02 compute-0 nova_compute[186018]: 2026-01-05 21:10:02.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:02 compute-0 nova_compute[186018]: 2026-01-05 21:10:02.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:10:02 compute-0 nova_compute[186018]: 2026-01-05 21:10:02.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:10:02 compute-0 nova_compute[186018]: 2026-01-05 21:10:02.839 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:03 compute-0 nova_compute[186018]: 2026-01-05 21:10:03.175 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:10:03 compute-0 nova_compute[186018]: 2026-01-05 21:10:03.176 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:10:03 compute-0 nova_compute[186018]: 2026-01-05 21:10:03.177 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:10:03 compute-0 nova_compute[186018]: 2026-01-05 21:10:03.177 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:10:03 compute-0 podman[241953]: 2026-01-05 21:10:03.743628678 +0000 UTC m=+0.094409584 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, name=ubi9-minimal, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, managed_by=edpm_ansible, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.230 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.250 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.251 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.252 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.253 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.283 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.284 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.284 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.285 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.395 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 podman[241974]: 2026-01-05 21:10:05.41616986 +0000 UTC m=+0.152669596 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller)
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.483 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.487 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.555 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.557 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.613 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.615 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.710 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.718 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.797 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.799 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.897 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.898 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.980 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:05 compute-0 nova_compute[186018]: 2026-01-05 21:10:05.983 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.064 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.458 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5035MB free_disk=72.4006118774414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.461 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.462 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.544 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.545 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.545 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.546 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.616 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.632 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.635 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.635 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.174s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.844 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.846 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:06 compute-0 nova_compute[186018]: 2026-01-05 21:10:06.935 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:07 compute-0 nova_compute[186018]: 2026-01-05 21:10:07.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:07 compute-0 nova_compute[186018]: 2026-01-05 21:10:07.843 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:08 compute-0 nova_compute[186018]: 2026-01-05 21:10:08.458 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:09 compute-0 nova_compute[186018]: 2026-01-05 21:10:09.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:09 compute-0 podman[242021]: 2026-01-05 21:10:09.785004638 +0000 UTC m=+0.115532809 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:10:09 compute-0 podman[242022]: 2026-01-05 21:10:09.796912681 +0000 UTC m=+0.119541795 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:10:11 compute-0 nova_compute[186018]: 2026-01-05 21:10:11.937 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:12 compute-0 nova_compute[186018]: 2026-01-05 21:10:12.848 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:16 compute-0 nova_compute[186018]: 2026-01-05 21:10:16.940 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:17 compute-0 nova_compute[186018]: 2026-01-05 21:10:17.853 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:18 compute-0 podman[242059]: 2026-01-05 21:10:18.758852162 +0000 UTC m=+0.102999978 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:10:21 compute-0 nova_compute[186018]: 2026-01-05 21:10:21.946 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:22 compute-0 podman[242082]: 2026-01-05 21:10:22.773986839 +0000 UTC m=+0.108572977 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:10:22 compute-0 nova_compute[186018]: 2026-01-05 21:10:22.857 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:25 compute-0 podman[242101]: 2026-01-05 21:10:25.789065367 +0000 UTC m=+0.122723128 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, distribution-scope=public, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, release-0.7.12=, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., config_id=kepler, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, com.redhat.component=ubi9-container, io.openshift.tags=base rhel9, name=ubi9, version=9.4)
Jan 05 21:10:26 compute-0 nova_compute[186018]: 2026-01-05 21:10:26.948 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:27 compute-0 podman[242120]: 2026-01-05 21:10:27.790426597 +0000 UTC m=+0.130450242 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:10:27 compute-0 nova_compute[186018]: 2026-01-05 21:10:27.860 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:29 compute-0 podman[202426]: time="2026-01-05T21:10:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:10:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:10:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:10:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:10:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4354 "" "Go-http-client/1.1"
Jan 05 21:10:31 compute-0 openstack_network_exporter[205720]: ERROR   21:10:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:10:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:10:31 compute-0 openstack_network_exporter[205720]: ERROR   21:10:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:10:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:10:31 compute-0 nova_compute[186018]: 2026-01-05 21:10:31.953 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:32 compute-0 nova_compute[186018]: 2026-01-05 21:10:32.864 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:34 compute-0 podman[242140]: 2026-01-05 21:10:34.775488714 +0000 UTC m=+0.110741221 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, name=ubi9-minimal, release=1755695350, vcs-type=git, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, architecture=x86_64, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7)
Jan 05 21:10:35 compute-0 podman[242160]: 2026-01-05 21:10:35.818047553 +0000 UTC m=+0.160657450 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 21:10:36 compute-0 nova_compute[186018]: 2026-01-05 21:10:36.956 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:37 compute-0 nova_compute[186018]: 2026-01-05 21:10:37.868 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:40 compute-0 podman[242185]: 2026-01-05 21:10:40.770512192 +0000 UTC m=+0.116624105 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:10:40 compute-0 podman[242186]: 2026-01-05 21:10:40.793502745 +0000 UTC m=+0.120053926 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:10:41 compute-0 nova_compute[186018]: 2026-01-05 21:10:41.957 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:10:42.843 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:10:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:10:42.844 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:10:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:10:42.844 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:10:42 compute-0 nova_compute[186018]: 2026-01-05 21:10:42.874 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:46 compute-0 nova_compute[186018]: 2026-01-05 21:10:46.960 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:47 compute-0 nova_compute[186018]: 2026-01-05 21:10:47.880 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:49 compute-0 podman[242227]: 2026-01-05 21:10:49.76095017 +0000 UTC m=+0.113408222 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:10:51 compute-0 nova_compute[186018]: 2026-01-05 21:10:51.967 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:52 compute-0 nova_compute[186018]: 2026-01-05 21:10:52.885 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:53 compute-0 podman[242252]: 2026-01-05 21:10:53.767864803 +0000 UTC m=+0.107542818 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:10:56 compute-0 podman[242272]: 2026-01-05 21:10:56.75174918 +0000 UTC m=+0.106452039 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, config_id=kepler, release=1214.1726694543, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, managed_by=edpm_ansible, architecture=x86_64, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:10:56 compute-0 nova_compute[186018]: 2026-01-05 21:10:56.969 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:57 compute-0 nova_compute[186018]: 2026-01-05 21:10:57.889 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:10:58 compute-0 nova_compute[186018]: 2026-01-05 21:10:58.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:10:58 compute-0 nova_compute[186018]: 2026-01-05 21:10:58.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:10:58 compute-0 podman[242290]: 2026-01-05 21:10:58.787471391 +0000 UTC m=+0.137679517 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 05 21:10:59 compute-0 podman[202426]: time="2026-01-05T21:10:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:10:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:10:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:10:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:10:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4356 "" "Go-http-client/1.1"
Jan 05 21:11:01 compute-0 openstack_network_exporter[205720]: ERROR   21:11:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:11:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:11:01 compute-0 openstack_network_exporter[205720]: ERROR   21:11:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:11:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:11:01 compute-0 nova_compute[186018]: 2026-01-05 21:11:01.969 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:02 compute-0 nova_compute[186018]: 2026-01-05 21:11:02.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:02 compute-0 nova_compute[186018]: 2026-01-05 21:11:02.893 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:03 compute-0 nova_compute[186018]: 2026-01-05 21:11:03.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:03 compute-0 nova_compute[186018]: 2026-01-05 21:11:03.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:11:04 compute-0 nova_compute[186018]: 2026-01-05 21:11:04.225 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:11:04 compute-0 nova_compute[186018]: 2026-01-05 21:11:04.226 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:11:04 compute-0 nova_compute[186018]: 2026-01-05 21:11:04.226 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:11:05 compute-0 podman[242309]: 2026-01-05 21:11:05.806183733 +0000 UTC m=+0.134262308 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, version=9.6, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, vendor=Red Hat, Inc., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.469 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.490 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.491 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.492 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.516 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.518 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.519 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.520 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.622 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.730 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.108s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.731 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.813 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.814 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:06 compute-0 podman[242329]: 2026-01-05 21:11:06.846650416 +0000 UTC m=+0.186835924 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.899 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.900 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.973 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.980 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:06 compute-0 nova_compute[186018]: 2026-01-05 21:11:06.991 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.090 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.092 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.178 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.180 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.245 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.247 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.357 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.110s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.780 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.780 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.781 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.782 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.796 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'name': 'vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.801 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.801 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.802 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.803 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.803 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.803 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.803 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5058MB free_disk=72.4006118774414GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.804 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.804 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.805 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.805 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:11:07.803546) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:11:07.806541) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.813 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets volume: 30 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.818 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:11:07.820752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.821 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.822 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.823 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.823 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.824 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets volume: 40 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.825 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:11:07.824407) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.827 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.829 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.830 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes volume: 4718 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.830 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:11:07.827758) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:11:07.830004) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.834 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes.delta volume: 140 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:11:07.833940) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.835 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.837 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.837 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.837 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.838 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:11:07.838154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.838 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.839 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.841 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.841 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.842 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:11:07.841522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.845 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:11:07.844550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.880 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.882 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.883 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 nova_compute[186018]: 2026-01-05 21:11:07.897 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.921 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.922 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.922 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.923 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.924 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.925 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.925 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.926 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.927 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.927 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.927 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.927 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.927 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.928 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.928 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.928 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:11:07.924828) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.929 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:11:07.928108) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.930 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.930 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.930 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.931 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.931 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.932 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:11:07.931411) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.968 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.988 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.8828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.989 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.989 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.990 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.990 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.990 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.991 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.991 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.991 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.992 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.992 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.992 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.992 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.993 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.993 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.994 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.994 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.994 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.994 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:11:07.990135) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.995 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:11:07.991876) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:07.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:11:07.995175) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.094 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.095 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.096 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.107 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.215 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.216 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.217 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.218 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.218 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.218 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.219 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.219 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.219 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.219 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes volume: 4807 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.220 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 1968 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.220 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.221 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.221 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.221 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.221 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.221 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:11:08.219459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.222 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.222 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 441838413 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:11:08.222081) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.223 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 97302278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.223 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 82890817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.224 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.224 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.225 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.226 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.226 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.227 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.227 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:11:08.226836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.227 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.228 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.228 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.228 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.229 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.230 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.230 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.231 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:11:08.231187) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.232 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.232 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.233 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.233 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.233 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.234 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.234 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.234 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.235 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.235 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.235 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:11:08.235139) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.235 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.236 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.236 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.237 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.237 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.237 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.237 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/cpu volume: 161860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.238 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 36200000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:11:08.238355) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.239 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 1660248415 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.240 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 11989637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.240 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.240 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.240 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:11:08.239767) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.241 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.241 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.242 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.243 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.243 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.243 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:11:08.242651) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.243 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.244 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.244 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.244 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.245 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.246 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:11:08.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.319 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.332 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.333 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.334 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.530s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.334 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.335 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.350 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.356 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.358 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:11:08 compute-0 nova_compute[186018]: 2026-01-05 21:11:08.370 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:09 compute-0 nova_compute[186018]: 2026-01-05 21:11:09.348 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:09 compute-0 nova_compute[186018]: 2026-01-05 21:11:09.349 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:09 compute-0 nova_compute[186018]: 2026-01-05 21:11:09.349 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:09 compute-0 nova_compute[186018]: 2026-01-05 21:11:09.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:11 compute-0 podman[242380]: 2026-01-05 21:11:11.767562581 +0000 UTC m=+0.106105960 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:11:11 compute-0 podman[242379]: 2026-01-05 21:11:11.7774749 +0000 UTC m=+0.125565570 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS)
Jan 05 21:11:11 compute-0 nova_compute[186018]: 2026-01-05 21:11:11.982 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:12 compute-0 nova_compute[186018]: 2026-01-05 21:11:12.901 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:16 compute-0 nova_compute[186018]: 2026-01-05 21:11:16.993 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:17 compute-0 nova_compute[186018]: 2026-01-05 21:11:17.906 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:20 compute-0 podman[242421]: 2026-01-05 21:11:20.790841518 +0000 UTC m=+0.133184320 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:11:22 compute-0 nova_compute[186018]: 2026-01-05 21:11:22.000 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:22 compute-0 nova_compute[186018]: 2026-01-05 21:11:22.909 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:24 compute-0 podman[242444]: 2026-01-05 21:11:24.776536266 +0000 UTC m=+0.123221079 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 05 21:11:27 compute-0 nova_compute[186018]: 2026-01-05 21:11:27.003 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:27 compute-0 podman[242462]: 2026-01-05 21:11:27.817841017 +0000 UTC m=+0.149712313 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, build-date=2024-09-18T21:23:30, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.buildah.version=1.29.0, release=1214.1726694543, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release-0.7.12=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, architecture=x86_64, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc.)
Jan 05 21:11:27 compute-0 nova_compute[186018]: 2026-01-05 21:11:27.913 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:29 compute-0 podman[242483]: 2026-01-05 21:11:29.72385116 +0000 UTC m=+0.081681560 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 05 21:11:29 compute-0 podman[202426]: time="2026-01-05T21:11:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:11:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:11:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:11:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:11:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4350 "" "Go-http-client/1.1"
Jan 05 21:11:31 compute-0 openstack_network_exporter[205720]: ERROR   21:11:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:11:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:11:31 compute-0 openstack_network_exporter[205720]: ERROR   21:11:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:11:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:11:32 compute-0 nova_compute[186018]: 2026-01-05 21:11:32.006 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:32 compute-0 nova_compute[186018]: 2026-01-05 21:11:32.916 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:36 compute-0 podman[242503]: 2026-01-05 21:11:36.785942578 +0000 UTC m=+0.117504438 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, vendor=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.tags=minimal rhel9, vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=openstack_network_exporter)
Jan 05 21:11:37 compute-0 nova_compute[186018]: 2026-01-05 21:11:37.011 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:37 compute-0 podman[242524]: 2026-01-05 21:11:37.866223223 +0000 UTC m=+0.207693291 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, 
config_id=ovn_controller, container_name=ovn_controller)
Jan 05 21:11:37 compute-0 nova_compute[186018]: 2026-01-05 21:11:37.921 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:42 compute-0 nova_compute[186018]: 2026-01-05 21:11:42.014 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:42 compute-0 podman[242549]: 2026-01-05 21:11:42.751058873 +0000 UTC m=+0.088580021 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:11:42 compute-0 podman[242550]: 2026-01-05 21:11:42.760480369 +0000 UTC m=+0.098491180 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:11:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:11:42.845 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:11:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:11:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:11:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:11:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:11:42 compute-0 nova_compute[186018]: 2026-01-05 21:11:42.926 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:47 compute-0 nova_compute[186018]: 2026-01-05 21:11:47.018 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:47 compute-0 nova_compute[186018]: 2026-01-05 21:11:47.931 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:51 compute-0 podman[242588]: 2026-01-05 21:11:51.77587857 +0000 UTC m=+0.110374761 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:11:52 compute-0 nova_compute[186018]: 2026-01-05 21:11:52.021 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:52 compute-0 nova_compute[186018]: 2026-01-05 21:11:52.936 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:55 compute-0 podman[242613]: 2026-01-05 21:11:55.801923005 +0000 UTC m=+0.136242860 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true)
Jan 05 21:11:57 compute-0 nova_compute[186018]: 2026-01-05 21:11:57.026 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:57 compute-0 nova_compute[186018]: 2026-01-05 21:11:57.939 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:11:58 compute-0 nova_compute[186018]: 2026-01-05 21:11:58.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:11:58 compute-0 nova_compute[186018]: 2026-01-05 21:11:58.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:11:58 compute-0 podman[242633]: 2026-01-05 21:11:58.772455273 +0000 UTC m=+0.120106527 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, container_name=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, distribution-scope=public, release=1214.1726694543, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30)
Jan 05 21:11:59 compute-0 podman[202426]: time="2026-01-05T21:11:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:11:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:11:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:11:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:11:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4359 "" "Go-http-client/1.1"
Jan 05 21:12:00 compute-0 podman[242653]: 2026-01-05 21:12:00.737036891 +0000 UTC m=+0.090846481 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute)
Jan 05 21:12:01 compute-0 openstack_network_exporter[205720]: ERROR   21:12:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:12:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:12:01 compute-0 openstack_network_exporter[205720]: ERROR   21:12:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:12:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:12:02 compute-0 nova_compute[186018]: 2026-01-05 21:12:02.029 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:02 compute-0 nova_compute[186018]: 2026-01-05 21:12:02.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:02 compute-0 nova_compute[186018]: 2026-01-05 21:12:02.945 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.765 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.766 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.767 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:12:03 compute-0 nova_compute[186018]: 2026-01-05 21:12:03.767 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:12:05 compute-0 nova_compute[186018]: 2026-01-05 21:12:05.302 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:12:05 compute-0 nova_compute[186018]: 2026-01-05 21:12:05.330 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:12:05 compute-0 nova_compute[186018]: 2026-01-05 21:12:05.331 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:12:06 compute-0 nova_compute[186018]: 2026-01-05 21:12:06.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:06 compute-0 nova_compute[186018]: 2026-01-05 21:12:06.573 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:06.575 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=5, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=4) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:12:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:06.576 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 3 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:12:07 compute-0 nova_compute[186018]: 2026-01-05 21:12:07.033 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:07 compute-0 nova_compute[186018]: 2026-01-05 21:12:07.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:07 compute-0 podman[242673]: 2026-01-05 21:12:07.76940532 +0000 UTC m=+0.116463262 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., name=ubi9-minimal, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, distribution-scope=public, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9)
Jan 05 21:12:07 compute-0 nova_compute[186018]: 2026-01-05 21:12:07.948 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.486 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.488 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.490 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.605 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.689 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.691 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.752 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.753 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.810 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.813 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:08 compute-0 podman[242693]: 2026-01-05 21:12:08.818615182 +0000 UTC m=+0.163969586 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, tcib_managed=true, 
config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.899 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.905 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.979 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:08 compute-0 nova_compute[186018]: 2026-01-05 21:12:08.981 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.038 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.042 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.105 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.109 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.173 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.529 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.531 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5057MB free_disk=72.40069961547852GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.531 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.532 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:09.579 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '5'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.653 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.654 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.654 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.654 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.677 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.705 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.706 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.723 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.749 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.818 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.845 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.850 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:12:09 compute-0 nova_compute[186018]: 2026-01-05 21:12:09.851 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.319s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:10 compute-0 nova_compute[186018]: 2026-01-05 21:12:10.854 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:10 compute-0 nova_compute[186018]: 2026-01-05 21:12:10.855 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:10 compute-0 nova_compute[186018]: 2026-01-05 21:12:10.855 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:11 compute-0 nova_compute[186018]: 2026-01-05 21:12:11.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:12 compute-0 nova_compute[186018]: 2026-01-05 21:12:12.036 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:12 compute-0 nova_compute[186018]: 2026-01-05 21:12:12.952 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:13 compute-0 podman[242745]: 2026-01-05 21:12:13.739523905 +0000 UTC m=+0.073900056 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:12:13 compute-0 podman[242744]: 2026-01-05 21:12:13.772728495 +0000 UTC m=+0.117584301 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible)
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.778 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.778 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.791 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.890 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.891 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.898 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:12:14 compute-0 nova_compute[186018]: 2026-01-05 21:12:14.899 186022 INFO nova.compute.claims [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.068 186022 DEBUG nova.compute.provider_tree [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.083 186022 DEBUG nova.scheduler.client.report [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.104 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.104 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.155 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.156 186022 DEBUG nova.network.neutron [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.188 186022 INFO nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.264 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.563 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.566 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.567 186022 INFO nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Creating image(s)
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.568 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.569 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.570 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.597 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.685 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.686 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.687 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.700 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.766 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.767 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.803 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk 1073741824" returned: 0 in 0.036s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.805 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.117s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.805 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.860 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.861 186022 DEBUG nova.virt.disk.api [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking if we can resize image /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.862 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.921 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.922 186022 DEBUG nova.virt.disk.api [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Cannot resize image /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.922 186022 DEBUG nova.objects.instance [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'migration_context' on Instance uuid bc5c255f-3071-4754-9c2a-302e6237171f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.940 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.941 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.942 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:15 compute-0 nova_compute[186018]: 2026-01-05 21:12:15.954 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.018 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.019 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.021 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.048 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.118 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.119 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.163 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.165 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.144s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.166 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.224 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.226 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.227 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Ensure instance console log exists: /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.227 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.228 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:16 compute-0 nova_compute[186018]: 2026-01-05 21:12:16.229 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:17 compute-0 nova_compute[186018]: 2026-01-05 21:12:17.038 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:17 compute-0 nova_compute[186018]: 2026-01-05 21:12:17.956 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.422 186022 DEBUG nova.network.neutron [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Successfully updated port: 2fb09e12-6360-4c5c-be29-1c3782724ceb _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.441 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.441 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquired lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.441 186022 DEBUG nova.network.neutron [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.567 186022 DEBUG nova.compute.manager [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-changed-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.568 186022 DEBUG nova.compute.manager [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Refreshing instance network info cache due to event network-changed-2fb09e12-6360-4c5c-be29-1c3782724ceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:12:19 compute-0 nova_compute[186018]: 2026-01-05 21:12:19.568 186022 DEBUG oslo_concurrency.lockutils [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:12:20 compute-0 nova_compute[186018]: 2026-01-05 21:12:20.048 186022 DEBUG nova.network.neutron [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.367 186022 DEBUG nova.network.neutron [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.397 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Releasing lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.398 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance network_info: |[{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.398 186022 DEBUG oslo_concurrency.lockutils [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.399 186022 DEBUG nova.network.neutron [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Refreshing network info cache for port 2fb09e12-6360-4c5c-be29-1c3782724ceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.402 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Start _get_guest_xml network_info=[{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}], 'ephemerals': [{'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1, 'encrypted': False, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.411 186022 WARNING nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.418 186022 DEBUG nova.virt.libvirt.host [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.419 186022 DEBUG nova.virt.libvirt.host [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.430 186022 DEBUG nova.virt.libvirt.host [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.431 186022 DEBUG nova.virt.libvirt.host [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.432 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.432 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:05:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d9d5992a-1c00-4233-a43d-71321ed82446',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.433 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.433 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.434 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.434 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.435 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.435 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.436 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.436 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.437 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.438 186022 DEBUG nova.virt.hardware [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.443 186022 DEBUG nova.virt.libvirt.vif [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',id=3,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-le2fg87b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:12:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwODc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 21:12:21 compute-0 nova_compute[186018]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwODc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=bc5c255f-3071-4754-9c2a-302e6237171f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.443 186022 DEBUG nova.network.os_vif_util [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.444 186022 DEBUG nova.network.os_vif_util [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.446 186022 DEBUG nova.objects.instance [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'pci_devices' on Instance uuid bc5c255f-3071-4754-9c2a-302e6237171f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.470 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <uuid>bc5c255f-3071-4754-9c2a-302e6237171f</uuid>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <name>instance-00000003</name>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <memory>524288</memory>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:name>vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z</nova:name>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:12:21</nova:creationTime>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:flavor name="m1.small">
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:memory>512</nova:memory>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:user uuid="41f377b42540490198f271301cf5fe90">admin</nova:user>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:project uuid="704814115a61471f9b45484171f67b5f">admin</nova:project>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="31cf9c34-2e56-49e9-bb98-955ac3cf9185"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         <nova:port uuid="2fb09e12-6360-4c5c-be29-1c3782724ceb">
Jan 05 21:12:21 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="192.168.0.15" ipVersion="4"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <system>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="serial">bc5c255f-3071-4754-9c2a-302e6237171f</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="uuid">bc5c255f-3071-4754-9c2a-302e6237171f</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </system>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <os>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </os>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <features>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </features>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <target dev="vdb" bus="virtio"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.config"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:22:cf:e6"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <target dev="tap2fb09e12-63"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/console.log" append="off"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <video>
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </video>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:12:21 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:12:21 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:12:21 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:12:21 compute-0 nova_compute[186018]: </domain>
Jan 05 21:12:21 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.471 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Preparing to wait for external event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.472 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.472 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.472 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.473 186022 DEBUG nova.virt.libvirt.vif [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',id=3,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-le2fg87b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:12:15Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwODc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3B
vc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4
oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2d
TdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9
Jan 05 21:12:21 compute-0 nova_compute[186018]: wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwODc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29
udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=bc5c255f-3071-4754-9c2a-302e6237171f,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.473 186022 DEBUG nova.network.os_vif_util [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.474 186022 DEBUG nova.network.os_vif_util [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.474 186022 DEBUG os_vif [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.475 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.475 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.476 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.479 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.480 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap2fb09e12-63, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.480 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap2fb09e12-63, col_values=(('external_ids', {'iface-id': '2fb09e12-6360-4c5c-be29-1c3782724ceb', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:22:cf:e6', 'vm-uuid': 'bc5c255f-3071-4754-9c2a-302e6237171f'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.482 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.484 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:12:21 compute-0 NetworkManager[56598]: <info>  [1767647541.4868] manager: (tap2fb09e12-63): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/29)
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.492 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.494 186022 INFO os_vif [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63')
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.546 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.546 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.546 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.547 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No VIF found with MAC fa:16:3e:22:cf:e6, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:12:21 compute-0 nova_compute[186018]: 2026-01-05 21:12:21.547 186022 INFO nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Using config drive
Jan 05 21:12:21 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:12:21.443 186022 DEBUG nova.virt.libvirt.vif [None req-cb2f9ec9-46 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:12:21 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:12:21.473 186022 DEBUG nova.virt.libvirt.vif [None req-cb2f9ec9-46 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.042 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.447 186022 INFO nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Creating config drive at /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.config
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.457 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprz7mkpby execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.587 186022 DEBUG oslo_concurrency.processutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmprz7mkpby" returned: 0 in 0.130s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:12:22 compute-0 kernel: tap2fb09e12-63: entered promiscuous mode
Jan 05 21:12:22 compute-0 NetworkManager[56598]: <info>  [1767647542.6775] manager: (tap2fb09e12-63): new Tun device (/org/freedesktop/NetworkManager/Devices/30)
Jan 05 21:12:22 compute-0 ovn_controller[98229]: 2026-01-05T21:12:22Z|00046|binding|INFO|Claiming lport 2fb09e12-6360-4c5c-be29-1c3782724ceb for this chassis.
Jan 05 21:12:22 compute-0 ovn_controller[98229]: 2026-01-05T21:12:22Z|00047|binding|INFO|2fb09e12-6360-4c5c-be29-1c3782724ceb: Claiming fa:16:3e:22:cf:e6 192.168.0.15
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.680 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.686 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:cf:e6 192.168.0.15'], port_security=['fa:16:3e:22:cf:e6 192.168.0.15'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-aposstbqe4u5-3vxh7p6lsvtd-port-jshloneuhom7', 'neutron:cidrs': '192.168.0.15/24', 'neutron:device_id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-aposstbqe4u5-3vxh7p6lsvtd-port-jshloneuhom7', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=2fb09e12-6360-4c5c-be29-1c3782724ceb) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.688 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb09e12-6360-4c5c-be29-1c3782724ceb in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 bound to our chassis
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.689 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:12:22 compute-0 ovn_controller[98229]: 2026-01-05T21:12:22Z|00048|binding|INFO|Setting lport 2fb09e12-6360-4c5c-be29-1c3782724ceb ovn-installed in OVS
Jan 05 21:12:22 compute-0 ovn_controller[98229]: 2026-01-05T21:12:22Z|00049|binding|INFO|Setting lport 2fb09e12-6360-4c5c-be29-1c3782724ceb up in Southbound
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.699 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.705 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.715 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a733e751-d140-423b-8a1e-8b3d1b45bc48]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 systemd-udevd[242850]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:12:22 compute-0 systemd-machined[157312]: New machine qemu-3-instance-00000003.
Jan 05 21:12:22 compute-0 systemd[1]: Started Virtual Machine qemu-3-instance-00000003.
Jan 05 21:12:22 compute-0 NetworkManager[56598]: <info>  [1767647542.7415] device (tap2fb09e12-63): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:12:22 compute-0 NetworkManager[56598]: <info>  [1767647542.7429] device (tap2fb09e12-63): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.742 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[b9b2510c-9884-4ca8-9736-a2e18487aebb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.746 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[ad2426fc-9886-4029-b26e-f0bfd32a8219]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 podman[242823]: 2026-01-05 21:12:22.753488229 +0000 UTC m=+0.107039124 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.781 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[dc626335-139f-48e5-88a2-b829989c9997]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.798 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[29af218f-0bea-404b-b849-fdb2e01de676]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 7, 'rx_bytes': 532, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 16123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 242867, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.815 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ffc20992-416c-437f-9bb7-2f82b225d382]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242872, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 242872, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.818 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.820 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 nova_compute[186018]: 2026-01-05 21:12:22.821 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.822 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.822 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.823 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:12:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:22.823 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.375 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647543.3740513, bc5c255f-3071-4754-9c2a-302e6237171f => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.375 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] VM Started (Lifecycle Event)
Jan 05 21:12:23 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.457 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:12:23 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.466 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647543.3789372, bc5c255f-3071-4754-9c2a-302e6237171f => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.466 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] VM Paused (Lifecycle Event)
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.496 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.501 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.543 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.656 186022 DEBUG nova.network.neutron [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updated VIF entry in instance network info cache for port 2fb09e12-6360-4c5c-be29-1c3782724ceb. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.656 186022 DEBUG nova.network.neutron [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.689 186022 DEBUG oslo_concurrency.lockutils [req-ea47a54e-be80-4b41-9f25-c2f37553f7ef req-abea6da4-c1ec-43fa-bd89-acec3ba35e80 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.728 186022 DEBUG nova.compute.manager [req-a21cd2f3-02b0-4f5f-9edc-c91f3b584390 req-f50b3625-a829-4e09-a4c5-182b1a4b0961 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.728 186022 DEBUG oslo_concurrency.lockutils [req-a21cd2f3-02b0-4f5f-9edc-c91f3b584390 req-f50b3625-a829-4e09-a4c5-182b1a4b0961 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.729 186022 DEBUG oslo_concurrency.lockutils [req-a21cd2f3-02b0-4f5f-9edc-c91f3b584390 req-f50b3625-a829-4e09-a4c5-182b1a4b0961 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.729 186022 DEBUG oslo_concurrency.lockutils [req-a21cd2f3-02b0-4f5f-9edc-c91f3b584390 req-f50b3625-a829-4e09-a4c5-182b1a4b0961 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.729 186022 DEBUG nova.compute.manager [req-a21cd2f3-02b0-4f5f-9edc-c91f3b584390 req-f50b3625-a829-4e09-a4c5-182b1a4b0961 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Processing event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.730 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.754 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647543.7526186, bc5c255f-3071-4754-9c2a-302e6237171f => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.754 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] VM Resumed (Lifecycle Event)
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.756 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.766 186022 INFO nova.virt.libvirt.driver [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance spawned successfully.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.766 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.806 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.812 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.823 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.825 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.826 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.826 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.827 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.827 186022 DEBUG nova.virt.libvirt.driver [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.833 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.880 186022 INFO nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Took 8.32 seconds to spawn the instance on the hypervisor.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.880 186022 DEBUG nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.941 186022 INFO nova.compute.manager [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Took 9.09 seconds to build instance.
Jan 05 21:12:23 compute-0 nova_compute[186018]: 2026-01-05 21:12:23.959 186022 DEBUG oslo_concurrency.lockutils [None req-cb2f9ec9-4643-4ac1-90f3-132f2c91fe4b 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.823 186022 DEBUG nova.compute.manager [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.823 186022 DEBUG oslo_concurrency.lockutils [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.824 186022 DEBUG oslo_concurrency.lockutils [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.824 186022 DEBUG oslo_concurrency.lockutils [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.824 186022 DEBUG nova.compute.manager [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] No waiting events found dispatching network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:12:25 compute-0 nova_compute[186018]: 2026-01-05 21:12:25.825 186022 WARNING nova.compute.manager [req-7ffbcdf3-1d9d-489d-921e-5601a2efb8b1 req-e9ad4890-bede-4bd8-996b-a40f894d592e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received unexpected event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb for instance with vm_state active and task_state None.
Jan 05 21:12:26 compute-0 nova_compute[186018]: 2026-01-05 21:12:26.484 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:26 compute-0 podman[242900]: 2026-01-05 21:12:26.737429041 +0000 UTC m=+0.090602294 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:12:27 compute-0 nova_compute[186018]: 2026-01-05 21:12:27.045 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:29 compute-0 podman[242919]: 2026-01-05 21:12:29.737629246 +0000 UTC m=+0.084454703 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, managed_by=edpm_ansible, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, version=9.4, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, release=1214.1726694543, release-0.7.12=, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler)
Jan 05 21:12:29 compute-0 podman[202426]: time="2026-01-05T21:12:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:12:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:12:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:12:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:12:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Jan 05 21:12:31 compute-0 openstack_network_exporter[205720]: ERROR   21:12:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:12:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:12:31 compute-0 openstack_network_exporter[205720]: ERROR   21:12:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:12:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:12:31 compute-0 nova_compute[186018]: 2026-01-05 21:12:31.487 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:31 compute-0 podman[242937]: 2026-01-05 21:12:31.745841567 +0000 UTC m=+0.099147258 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 05 21:12:32 compute-0 nova_compute[186018]: 2026-01-05 21:12:32.049 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:36 compute-0 nova_compute[186018]: 2026-01-05 21:12:36.491 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:37 compute-0 nova_compute[186018]: 2026-01-05 21:12:37.052 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:38 compute-0 podman[242953]: 2026-01-05 21:12:38.764372394 +0000 UTC m=+0.118347281 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, io.openshift.tags=minimal rhel9, architecture=x86_64, com.redhat.component=ubi9-minimal-container, vcs-type=git, vendor=Red Hat, Inc., config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, name=ubi9-minimal)
Jan 05 21:12:39 compute-0 podman[242974]: 2026-01-05 21:12:39.801416388 +0000 UTC m=+0.146844248 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 05 21:12:41 compute-0 nova_compute[186018]: 2026-01-05 21:12:41.498 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:42 compute-0 nova_compute[186018]: 2026-01-05 21:12:42.056 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:12:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:12:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:12:42.847 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:12:44 compute-0 podman[243001]: 2026-01-05 21:12:44.784497639 +0000 UTC m=+0.111334042 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:12:44 compute-0 podman[243000]: 2026-01-05 21:12:44.811165521 +0000 UTC m=+0.146867207 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:12:46 compute-0 nova_compute[186018]: 2026-01-05 21:12:46.503 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:47 compute-0 nova_compute[186018]: 2026-01-05 21:12:47.058 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:51 compute-0 nova_compute[186018]: 2026-01-05 21:12:51.508 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:52 compute-0 nova_compute[186018]: 2026-01-05 21:12:52.061 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:52 compute-0 ovn_controller[98229]: 2026-01-05T21:12:52Z|00050|memory_trim|INFO|Detected inactivity (last active 30015 ms ago): trimming memory
Jan 05 21:12:53 compute-0 podman[243039]: 2026-01-05 21:12:53.768215611 +0000 UTC m=+0.112842472 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:12:56 compute-0 nova_compute[186018]: 2026-01-05 21:12:56.511 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:57 compute-0 nova_compute[186018]: 2026-01-05 21:12:57.063 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:12:57 compute-0 podman[243069]: 2026-01-05 21:12:57.763892155 +0000 UTC m=+0.110122820 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 21:12:57 compute-0 ovn_controller[98229]: 2026-01-05T21:12:57Z|00008|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:22:cf:e6 192.168.0.15
Jan 05 21:12:57 compute-0 ovn_controller[98229]: 2026-01-05T21:12:57Z|00009|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:22:cf:e6 192.168.0.15
Jan 05 21:12:58 compute-0 nova_compute[186018]: 2026-01-05 21:12:58.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:12:58 compute-0 nova_compute[186018]: 2026-01-05 21:12:58.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:12:59 compute-0 podman[202426]: time="2026-01-05T21:12:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:12:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:12:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:12:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:12:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 05 21:13:00 compute-0 podman[243088]: 2026-01-05 21:13:00.787174592 +0000 UTC m=+0.121074389 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, name=ubi9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., managed_by=edpm_ansible, config_id=kepler, container_name=kepler, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., architecture=x86_64, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 05 21:13:01 compute-0 openstack_network_exporter[205720]: ERROR   21:13:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:13:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:13:01 compute-0 openstack_network_exporter[205720]: ERROR   21:13:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:13:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:13:01 compute-0 nova_compute[186018]: 2026-01-05 21:13:01.514 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:02 compute-0 nova_compute[186018]: 2026-01-05 21:13:02.068 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:02 compute-0 podman[243107]: 2026-01-05 21:13:02.750325271 +0000 UTC m=+0.098742640 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224)
Jan 05 21:13:03 compute-0 nova_compute[186018]: 2026-01-05 21:13:03.464 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:04 compute-0 nova_compute[186018]: 2026-01-05 21:13:04.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:04 compute-0 nova_compute[186018]: 2026-01-05 21:13:04.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:13:05 compute-0 nova_compute[186018]: 2026-01-05 21:13:05.285 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:13:05 compute-0 nova_compute[186018]: 2026-01-05 21:13:05.286 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:13:05 compute-0 nova_compute[186018]: 2026-01-05 21:13:05.287 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:13:06 compute-0 nova_compute[186018]: 2026-01-05 21:13:06.521 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.070 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.335 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.365 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.367 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:07 compute-0 nova_compute[186018]: 2026-01-05 21:13:07.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.781 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.782 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.782 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.783 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'name': 'vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.808 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance bc5c255f-3071-4754-9c2a-302e6237171f from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:13:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:07.813 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/bc5c255f-3071-4754-9c2a-302e6237171f -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.494 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.497 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.498 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.525 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1959 Content-Type: application/json Date: Mon, 05 Jan 2026 21:13:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-ec6135b6-2e7f-4d7b-8442-572dab70aaa7 x-openstack-request-id: req-ec6135b6-2e7f-4d7b-8442-572dab70aaa7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.525 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "bc5c255f-3071-4754-9c2a-302e6237171f", "name": "vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z", "status": "ACTIVE", "tenant_id": "704814115a61471f9b45484171f67b5f", "user_id": "41f377b42540490198f271301cf5fe90", "metadata": {"metering.server_group": "a6371b97-6a0c-4b37-9443-eaf5410da4a4"}, "hostId": "cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424", "image": {"id": "31cf9c34-2e56-49e9-bb98-955ac3cf9185", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/31cf9c34-2e56-49e9-bb98-955ac3cf9185"}]}, "flavor": {"id": "d9d5992a-1c00-4233-a43d-71321ed82446", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d9d5992a-1c00-4233-a43d-71321ed82446"}]}, "created": "2026-01-05T21:12:12Z", "updated": "2026-01-05T21:12:23Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.15", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:cf:e6"}, {"version": 4, "addr": "192.168.122.234", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:22:cf:e6"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/bc5c255f-3071-4754-9c2a-302e6237171f"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/bc5c255f-3071-4754-9c2a-302e6237171f"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:12:23.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000003", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.525 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/bc5c255f-3071-4754-9c2a-302e6237171f used request id req-ec6135b6-2e7f-4d7b-8442-572dab70aaa7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.528 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'name': 'vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.533 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.534 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.534 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.534 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.535 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.535 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:13:08.534891) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.537 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.537 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.538 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.538 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.538 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.538 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.539 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:13:08.538837) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.545 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.552 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for bc5c255f-3071-4754-9c2a-302e6237171f / tap2fb09e12-63 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.552 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.568 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 19 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.570 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.571 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.571 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.572 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.573 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:13:08.571026) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.574 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.574 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.574 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.574 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.575 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.575 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.575 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets volume: 41 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.576 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.576 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.577 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.578 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.579 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:13:08.575357) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.579 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.579 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.580 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.580 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:13:08.580201) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.583 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.584 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.584 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.584 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.584 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.585 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:13:08.584840) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.585 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes volume: 4788 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.586 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes volume: 1751 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.587 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2272 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.588 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.588 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.589 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.590 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.590 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.591 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:13:08.589804) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.592 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.592 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.594 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:13:08.593752) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.593 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.594 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z>]
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.595 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.595 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.595 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.596 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.596 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.596 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.597 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.597 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.599 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:13:08.596292) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.600 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.600 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.601 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.601 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.601 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:13:08.600988) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.601 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.602 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.602 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.602 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.602 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.603 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.603 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.603 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.603 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:13:08.603206) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.651 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.653 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.654 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.654 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.695 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.696 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.696 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.735 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.736 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.736 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.737 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.738 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.738 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.738 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:13:08.737661) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.738 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.739 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.740 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z>]
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.740 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:13:08.739823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.742 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.742 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.743 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.743 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.744 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.744 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:13:08.743923) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.744 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.745 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.746 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.747 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.747 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.747 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.747 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.747 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.748 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.748 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:13:08.747781) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.785 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.789 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.794 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/memory.usage volume: 49.125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.845 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/memory.usage volume: 49.66796875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.875 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.877 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.890 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.891 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.892 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.893 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.893 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.894 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.894 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.894 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.895 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.895 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.895 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.896 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.896 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.897 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.897 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.897 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.898 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.898 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.899 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.899 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.899 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.899 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:13:08.892449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.900 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:13:08.895016) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:08.901 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:13:08.899550) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.944 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:08 compute-0 nova_compute[186018]: 2026-01-05 21:13:08.945 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.006 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.006 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.007 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.018 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.037 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.101 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.102 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.102 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.109 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.111 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.173 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.176 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.203 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.204 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.204 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.205 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.206 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes volume: 4891 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.206 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.206 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2052 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.207 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.207 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.207 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.207 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:13:09.205871) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.208 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.208 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.208 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.208 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 441838413 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.208 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 97302278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.209 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 82890817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.209 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 420422303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.209 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 95348408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.210 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 83683963 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.210 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.210 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.210 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.211 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.211 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:13:09.208468) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.212 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.213 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.213 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.213 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.214 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.214 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.215 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.215 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:13:09.212481) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.215 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.216 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.217 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.217 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.217 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.218 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.218 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.218 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.218 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.219 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.219 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.220 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.220 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.221 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.221 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.222 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:13:09.218399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.222 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.222 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.225 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.225 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.225 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 41848832 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:13:09.226160) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.226 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.227 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.227 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 41709568 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.227 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.228 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.228 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.228 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.228 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.229 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.229 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.229 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.229 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.230 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.230 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/cpu volume: 282280000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.230 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/cpu volume: 33780000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.230 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:13:09.230062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.230 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 38230000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.231 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.231 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.231 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.232 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.232 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.232 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.232 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 1660248415 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.232 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 11989637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:13:09.232475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 1159461157 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 12113149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.233 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.234 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.234 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.234 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.235 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 237 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.236 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.236 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.236 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 223 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.237 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.237 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.237 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.237 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.238 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.238 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.239 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:13:09.235704) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.239 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.240 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.241 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.242 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.247 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:13:09.248 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.305 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.325 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.390 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.391 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.472 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.474 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.537 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.539 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:13:09 compute-0 nova_compute[186018]: 2026-01-05 21:13:09.650 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.111s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:13:09 compute-0 podman[243164]: 2026-01-05 21:13:09.795516243 +0000 UTC m=+0.132958141 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, architecture=x86_64, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container)
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.106 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4894MB free_disk=72.37871170043945GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.109 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.109 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.202 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.202 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.203 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.203 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.203 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.313 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.333 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.356 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:13:10 compute-0 nova_compute[186018]: 2026-01-05 21:13:10.356 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:13:10 compute-0 podman[243187]: 2026-01-05 21:13:10.881858099 +0000 UTC m=+0.219312194 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, 
container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:13:11 compute-0 nova_compute[186018]: 2026-01-05 21:13:11.527 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:12 compute-0 nova_compute[186018]: 2026-01-05 21:13:12.075 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:12 compute-0 nova_compute[186018]: 2026-01-05 21:13:12.356 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:12 compute-0 nova_compute[186018]: 2026-01-05 21:13:12.357 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:12 compute-0 nova_compute[186018]: 2026-01-05 21:13:12.358 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:15 compute-0 podman[243214]: 2026-01-05 21:13:15.757708534 +0000 UTC m=+0.100780434 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 21:13:15 compute-0 podman[243215]: 2026-01-05 21:13:15.841341956 +0000 UTC m=+0.161782980 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:13:16 compute-0 nova_compute[186018]: 2026-01-05 21:13:16.533 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:17 compute-0 nova_compute[186018]: 2026-01-05 21:13:17.079 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:21 compute-0 nova_compute[186018]: 2026-01-05 21:13:21.539 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:22 compute-0 nova_compute[186018]: 2026-01-05 21:13:22.082 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:24 compute-0 podman[243258]: 2026-01-05 21:13:24.771157219 +0000 UTC m=+0.114939227 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:13:26 compute-0 nova_compute[186018]: 2026-01-05 21:13:26.544 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:27 compute-0 nova_compute[186018]: 2026-01-05 21:13:27.087 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:28 compute-0 podman[243282]: 2026-01-05 21:13:28.765880667 +0000 UTC m=+0.104171773 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:13:29 compute-0 podman[202426]: time="2026-01-05T21:13:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:13:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:13:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:13:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:13:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4353 "" "Go-http-client/1.1"
Jan 05 21:13:31 compute-0 openstack_network_exporter[205720]: ERROR   21:13:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:13:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:13:31 compute-0 openstack_network_exporter[205720]: ERROR   21:13:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:13:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:13:31 compute-0 nova_compute[186018]: 2026-01-05 21:13:31.547 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:31 compute-0 podman[243301]: 2026-01-05 21:13:31.770072102 +0000 UTC m=+0.108107807 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.openshift.expose-services=, managed_by=edpm_ansible, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., release-0.7.12=, architecture=x86_64, distribution-scope=public, release=1214.1726694543, vcs-type=git, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, vendor=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30)
Jan 05 21:13:32 compute-0 nova_compute[186018]: 2026-01-05 21:13:32.092 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:33 compute-0 podman[243321]: 2026-01-05 21:13:33.744396334 +0000 UTC m=+0.092688991 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 05 21:13:36 compute-0 nova_compute[186018]: 2026-01-05 21:13:36.552 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:37 compute-0 nova_compute[186018]: 2026-01-05 21:13:37.095 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:40 compute-0 podman[243341]: 2026-01-05 21:13:40.743458911 +0000 UTC m=+0.089047645 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, release=1755695350, build-date=2025-08-20T13:12:41)
Jan 05 21:13:41 compute-0 nova_compute[186018]: 2026-01-05 21:13:41.556 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:41 compute-0 podman[243362]: 2026-01-05 21:13:41.854686424 +0000 UTC m=+0.184474217 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:13:42 compute-0 nova_compute[186018]: 2026-01-05 21:13:42.100 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:13:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:13:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:13:42.846 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:13:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:13:42.847 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:13:46 compute-0 nova_compute[186018]: 2026-01-05 21:13:46.562 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:46 compute-0 podman[243388]: 2026-01-05 21:13:46.757618651 +0000 UTC m=+0.090432522 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:13:46 compute-0 podman[243387]: 2026-01-05 21:13:46.803482018 +0000 UTC m=+0.142062720 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:13:47 compute-0 nova_compute[186018]: 2026-01-05 21:13:47.101 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:51 compute-0 nova_compute[186018]: 2026-01-05 21:13:51.565 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:52 compute-0 nova_compute[186018]: 2026-01-05 21:13:52.103 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:55 compute-0 podman[243429]: 2026-01-05 21:13:55.718282967 +0000 UTC m=+0.072561811 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:13:56 compute-0 nova_compute[186018]: 2026-01-05 21:13:56.570 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:57 compute-0 nova_compute[186018]: 2026-01-05 21:13:57.105 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:13:59 compute-0 nova_compute[186018]: 2026-01-05 21:13:59.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:13:59 compute-0 nova_compute[186018]: 2026-01-05 21:13:59.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:13:59 compute-0 podman[202426]: time="2026-01-05T21:13:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:13:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:13:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:13:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:13:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4355 "" "Go-http-client/1.1"
Jan 05 21:13:59 compute-0 podman[243452]: 2026-01-05 21:13:59.792881619 +0000 UTC m=+0.132665984 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 05 21:14:01 compute-0 openstack_network_exporter[205720]: ERROR   21:14:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:14:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:14:01 compute-0 openstack_network_exporter[205720]: ERROR   21:14:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:14:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:14:01 compute-0 nova_compute[186018]: 2026-01-05 21:14:01.573 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:02 compute-0 nova_compute[186018]: 2026-01-05 21:14:02.109 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:02 compute-0 podman[243471]: 2026-01-05 21:14:02.785588309 +0000 UTC m=+0.127805255 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, version=9.4, maintainer=Red Hat, Inc., io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release-0.7.12=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler)
Jan 05 21:14:04 compute-0 nova_compute[186018]: 2026-01-05 21:14:04.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:04 compute-0 podman[243491]: 2026-01-05 21:14:04.721403038 +0000 UTC m=+0.070393364 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224)
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.773 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.774 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.778 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:14:05 compute-0 nova_compute[186018]: 2026-01-05 21:14:05.781 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:14:06 compute-0 nova_compute[186018]: 2026-01-05 21:14:06.578 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:07 compute-0 nova_compute[186018]: 2026-01-05 21:14:07.111 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.044 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.064 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.065 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.490 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.490 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.491 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.586 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:09.618 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=6, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=5) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:14:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:09.619 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:14:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:09.620 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '6'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.621 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.656 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.657 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.743 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.748 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.815 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.818 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.880 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.890 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.972 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:09 compute-0 nova_compute[186018]: 2026-01-05 21:14:09.974 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.066 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.069 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.164 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.166 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.266 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.282 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.384 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.385 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.441 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.443 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.527 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.528 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:10 compute-0 nova_compute[186018]: 2026-01-05 21:14:10.618 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.090s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.010 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.012 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4908MB free_disk=72.37876892089844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.013 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.014 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.098 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.099 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.100 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.101 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.101 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.228 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.251 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.252 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.252 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.238s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:11 compute-0 nova_compute[186018]: 2026-01-05 21:14:11.583 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:11 compute-0 podman[243545]: 2026-01-05 21:14:11.79587735 +0000 UTC m=+0.131888112 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, version=9.6, maintainer=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, release=1755695350, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, architecture=x86_64)
Jan 05 21:14:12 compute-0 nova_compute[186018]: 2026-01-05 21:14:12.116 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:12 compute-0 podman[243565]: 2026-01-05 21:14:12.83747802 +0000 UTC m=+0.175018888 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:14:14 compute-0 nova_compute[186018]: 2026-01-05 21:14:14.252 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:14 compute-0 nova_compute[186018]: 2026-01-05 21:14:14.252 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:14 compute-0 nova_compute[186018]: 2026-01-05 21:14:14.253 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:15 compute-0 nova_compute[186018]: 2026-01-05 21:14:15.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:14:16 compute-0 nova_compute[186018]: 2026-01-05 21:14:16.588 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:17 compute-0 nova_compute[186018]: 2026-01-05 21:14:17.119 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:17 compute-0 podman[243590]: 2026-01-05 21:14:17.759816219 +0000 UTC m=+0.111239109 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:14:17 compute-0 podman[243591]: 2026-01-05 21:14:17.781313295 +0000 UTC m=+0.125115515 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.235 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.236 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.261 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.353 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.354 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.363 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.364 186022 INFO nova.compute.claims [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.518 186022 DEBUG nova.compute.provider_tree [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.535 186022 DEBUG nova.scheduler.client.report [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.561 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.207s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.563 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.622 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.623 186022 DEBUG nova.network.neutron [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.664 186022 INFO nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.709 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.822 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.825 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.826 186022 INFO nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Creating image(s)
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.827 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.828 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.829 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.855 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.953 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.955 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.957 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:18 compute-0 nova_compute[186018]: 2026-01-05 21:14:18.982 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.064 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.066 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.134 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec,backing_fmt=raw /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk 1073741824" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.136 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.138 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.217 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.220 186022 DEBUG nova.virt.disk.api [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking if we can resize image /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.221 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.290 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.292 186022 DEBUG nova.virt.disk.api [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Cannot resize image /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.293 186022 DEBUG nova.objects.instance [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'migration_context' on Instance uuid 4f980272-c18f-4c66-9c04-8a07a7115de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.314 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.314 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.315 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.329 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.388 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.389 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.390 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.403 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.467 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.469 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.538 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 1073741824" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.539 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.149s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.540 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.623 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.625 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.626 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Ensure instance console log exists: /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.627 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.627 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:19 compute-0 nova_compute[186018]: 2026-01-05 21:14:19.628 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.512 186022 DEBUG nova.network.neutron [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Successfully updated port: 6fba2106-2ecf-47b1-ba86-3ca344528342 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.535 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.537 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.538 186022 DEBUG nova.network.neutron [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.618 186022 DEBUG nova.compute.manager [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-changed-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.620 186022 DEBUG nova.compute.manager [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Refreshing instance network info cache due to event network-changed-6fba2106-2ecf-47b1-ba86-3ca344528342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:14:20 compute-0 nova_compute[186018]: 2026-01-05 21:14:20.621 186022 DEBUG oslo_concurrency.lockutils [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:14:21 compute-0 nova_compute[186018]: 2026-01-05 21:14:21.407 186022 DEBUG nova.network.neutron [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:14:21 compute-0 nova_compute[186018]: 2026-01-05 21:14:21.592 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:22 compute-0 nova_compute[186018]: 2026-01-05 21:14:22.123 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.481 186022 DEBUG nova.network.neutron [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.509 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.510 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Instance network_info: |[{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.512 186022 DEBUG oslo_concurrency.lockutils [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.513 186022 DEBUG nova.network.neutron [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Refreshing network info cache for port 6fba2106-2ecf-47b1-ba86-3ca344528342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.520 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Start _get_guest_xml network_info=[{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}], 'ephemerals': [{'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1, 'encrypted': False, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.546 186022 WARNING nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.556 186022 DEBUG nova.virt.libvirt.host [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.558 186022 DEBUG nova.virt.libvirt.host [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.566 186022 DEBUG nova.virt.libvirt.host [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.567 186022 DEBUG nova.virt.libvirt.host [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.569 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.570 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:05:10Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='d9d5992a-1c00-4233-a43d-71321ed82446',id=1,is_public=True,memory_mb=512,name='m1.small',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:05:05Z,direct_url=<?>,disk_format='qcow2',id=31cf9c34-2e56-49e9-bb98-955ac3cf9185,min_disk=0,min_ram=0,name='cirros',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:05:07Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.573 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.575 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.576 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.577 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.578 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.579 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.580 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.581 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.581 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.582 186022 DEBUG nova.virt.hardware [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.589 186022 DEBUG nova.virt.libvirt.vif [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:14:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',id=4,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-jvficg90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha2
56='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:14:18Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTIyNjc3ODAyMzgwMDQxNzY4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFydC1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uO
iBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgICAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvb
GliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5jaG1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob
2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvKCclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJnc
Jan 05 21:14:23 compute-0 nova_compute[186018]: ywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTIyNjc3ODAyMzgwMDQxNzY4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1Uc
mFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=4f980272-c18f-4c66-9c04-8a07a7115de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} virt_type=kvm get_config 
/usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.590 186022 DEBUG nova.network.os_vif_util [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.592 186022 DEBUG nova.network.os_vif_util [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.595 186022 DEBUG nova.objects.instance [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4f980272-c18f-4c66-9c04-8a07a7115de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:14:23 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.632 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <uuid>4f980272-c18f-4c66-9c04-8a07a7115de7</uuid>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <name>instance-00000004</name>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <memory>524288</memory>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:name>vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak</nova:name>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:14:23</nova:creationTime>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:flavor name="m1.small">
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:memory>512</nova:memory>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:user uuid="41f377b42540490198f271301cf5fe90">admin</nova:user>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:project uuid="704814115a61471f9b45484171f67b5f">admin</nova:project>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="31cf9c34-2e56-49e9-bb98-955ac3cf9185"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         <nova:port uuid="6fba2106-2ecf-47b1-ba86-3ca344528342">
Jan 05 21:14:23 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="192.168.0.7" ipVersion="4"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <system>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="serial">4f980272-c18f-4c66-9c04-8a07a7115de7</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="uuid">4f980272-c18f-4c66-9c04-8a07a7115de7</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </system>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <os>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </os>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <features>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </features>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <target dev="vdb" bus="virtio"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.config"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:71:37:b5"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <target dev="tap6fba2106-2e"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/console.log" append="off"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <video>
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </video>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:14:23 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:14:23 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:14:23 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:14:23 compute-0 nova_compute[186018]: </domain>
Jan 05 21:14:23 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.645 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Preparing to wait for external event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.646 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.646 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.646 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.647 186022 DEBUG nova.virt.libvirt.vif [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:14:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',ec2_ids=EC2Ids,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',id=4,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-jvficg90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.open
stack.sha256='',network_allocated='True',owner_project_name='admin',owner_user_name='admin'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:14:18Z,user_data='Content-Type: multipart/mixed; boundary="===============1226778023800417688=="
MIME-Version: 1.0

--===============1226778023800417688==
Content-Type: text/cloud-config; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cloud-config"



# Capture all subprocess output into a logfile
# Useful for troubleshooting cloud-init issues
output: {all: '| tee -a /var/log/cloud-init-output.log'}

--===============1226778023800417688==
Content-Type: text/cloud-boothook; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="boothook.sh"

#!/usr/bin/bash

# FIXME(shadower) this is a workaround for cloud-init 0.6.3 present in Ubuntu
# 12.04 LTS:
# https://bugs.launchpad.net/heat/+bug/1257410
#
# The old cloud-init doesn't create the users directly so the commands to do
# this are injected though nova_utils.py.
#
# Once we drop support for 0.6.3, we can safely remove this.


# in case heat-cfntools has been installed from package but no symlinks
# are yet in /opt/aws/bin/
cfn-create-aws-symlinks

# Do not remove - the cloud boothook should always return success
exit 0

--===============1226778023800417688==
Content-Type: text/part-handler; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="part-handler.py"

# part-handler
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import os
import sys


def list_types():
    return ["text/x-cfninitdata"]


def handle_part(data, ctype, filename, payload):
    if ctype == "__begin__":
        try:
            os.makedirs('/var/lib/heat-cfntools', int("700", 8))
        except OSError:
            ex_type, e, tb = sys.exc_info()
            if e.errno != errno.EEXIST:
                raise
        return

    if ctype == "__end__":
        return

    timestamp = datetime.datetime.now()
    with open('/var/log/part-handler.log', 'a') as log:
        log.write('%s filename:%s, ctype:%s\n' % (timestamp, filename, ctype))

    if ctype == 'text/x-cfninitdata':
        with open('/var/lib/heat-cfntools/%s' % filename, 'w') as f:
            f.write(payload)

        # TODO(sdake) hopefully temporary until users move to heat-cfntools-1.3
        with open('/var/lib/cloud/data/%s' % filename, 'w') as f:
            f.write(payload)

--===============1226778023800417688==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-userdata"


--===============1226778023800417688==
Content-Type: text/x-shellscript; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="loguserdata.py"

#!/usr/bin/env python3
#
#    Licensed under the Apache License, Version 2.0 (the "License"); you may
#    not use this file except in compliance with the License. You may obtain
#    a copy of the License at
#
#         http://www.apache.org/licenses/LICENSE-2.0
#
#    Unless required by applicable law or agreed to in writing, software
#    distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
#    WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
#    License for the specific language governing permissions and limitations
#    under the License.

import datetime
import errno
import logging
import os
import subprocess
import sys


VAR_PATH = '/var/lib/heat-cfntools'
LOG = logging.getLogger('heat-provision')


def init_logging():
    LOG.setLevel(logging.INFO)
    LOG.addHandler(logging.StreamHandler())
    fh = logging.FileHandler("/var/log/heat-provision.log")
    os.chmod(fh.baseFilename, int("600", 8))
    LOG.addHandler(fh)


def call(args):

    class LogStream(object):

        def write(self, data):
            LOG.info(data)

    LOG.info('%s\n', ' '.join(args))  # noqa
    try:
        ls = LogStream()
        p = subprocess.Popen(args, stdout=subprocess.PIPE,
                             stderr=subprocess.PIPE)
        data = p.communicate()
        if data:
            for x in data:
                ls.write(x)
    except OSError:
        ex_type, ex, tb = sys.exc_info()
        if ex.errno == errno.ENOEXEC:
            LOG.error('Userdata empty or not executable: %s', ex)
            return os.EX_OK
        else:
            LOG.error('OS error running userdata: %s', ex)
            return os.EX_OSERR
    except Exception:
        ex_type, ex, tb = sys.exc_info()
        LOG.error('Unknown error running userdata: %s', ex)
        return os.EX_SOFTWARE
    return p.returncode


def main():
    userdata_path = os.path.join(VAR_PATH, 'cfn-userdata')
    os.chmod(userdata_path, int("700", 8))

    LOG.info('Provision began: %s', datetime.datetime.now())
    returncode = call([userdata_path])
    LOG.info('Provision done: %s', datetime.datetime.now())
    if returncode:
        return returncode


if __name__ == '__main__':
    init_logging()

    code = main()
    if code:
        LOG.error('Provision failed with exit code %s', code)
        sys.exit(code)

    provision_log = os.path.join(VAR_PATH, 'provision-finished')
    # touch the file so it is timestamped with when finished
    with open(provision_log, 'a'):
        os.utime(provision_log, None)

--===============1226778023800417688==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-metadata-server"

https://heat-cfnapi-internal.openstack.svc:8000/v1/
--===============1226778023800417688==
Content-Type: text/x-cfninitdata; charset="us-ascii"
MIME-Version: 1.0
Content-Transfer-Encoding: 7bit
Content-Disposition: attachment; filename="cfn-boto-cfg"

[Boto]
debug = 0
is_secure = 0
https_validate_certificates = 1
cfn_region_name = heat
cfn_region_endpoint = heat-cfnapi-internal.openstack.svc
--===============1226778023800417688==--
',user_id='41f377b42540490198f271301cf5fe90',uuid=4f980272-c18f-4c66-9c04-8a07a7115de7,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.648 186022 DEBUG nova.network.os_vif_util [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.648 186022 DEBUG nova.network.os_vif_util [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.649 186022 DEBUG os_vif [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.650 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.652 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.653 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.657 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.658 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap6fba2106-2e, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.659 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap6fba2106-2e, col_values=(('external_ids', {'iface-id': '6fba2106-2ecf-47b1-ba86-3ca344528342', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:71:37:b5', 'vm-uuid': '4f980272-c18f-4c66-9c04-8a07a7115de7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.662 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:23 compute-0 NetworkManager[56598]: <info>  [1767647663.6644] manager: (tap6fba2106-2e): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/31)
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.664 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.672 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.674 186022 INFO os_vif [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e')
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.748 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.749 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.750 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.750 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No VIF found with MAC fa:16:3e:71:37:b5, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:14:23 compute-0 nova_compute[186018]: 2026-01-05 21:14:23.751 186022 INFO nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Using config drive
Jan 05 21:14:23 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:14:23.589 186022 DEBUG nova.virt.libvirt.vif [None req-390a760a-e8 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.499 186022 INFO nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Creating config drive at /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.config
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.515 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdyjhkz0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.669 186022 DEBUG oslo_concurrency.processutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmptdyjhkz0" returned: 0 in 0.154s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:14:24 compute-0 kernel: tap6fba2106-2e: entered promiscuous mode
Jan 05 21:14:24 compute-0 ovn_controller[98229]: 2026-01-05T21:14:24Z|00051|binding|INFO|Claiming lport 6fba2106-2ecf-47b1-ba86-3ca344528342 for this chassis.
Jan 05 21:14:24 compute-0 ovn_controller[98229]: 2026-01-05T21:14:24Z|00052|binding|INFO|6fba2106-2ecf-47b1-ba86-3ca344528342: Claiming fa:16:3e:71:37:b5 192.168.0.7
Jan 05 21:14:24 compute-0 NetworkManager[56598]: <info>  [1767647664.8208] manager: (tap6fba2106-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/32)
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.821 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.825 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.838 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.848 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:37:b5 192.168.0.7'], port_security=['fa:16:3e:71:37:b5 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-ozi7dsf63p6s-yfrgspb44fvx-port-z3a4cfes3len', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-ozi7dsf63p6s-yfrgspb44fvx-port-z3a4cfes3len', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '2', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=6fba2106-2ecf-47b1-ba86-3ca344528342) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.850 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 6fba2106-2ecf-47b1-ba86-3ca344528342 in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 bound to our chassis
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.853 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:14:24 compute-0 ovn_controller[98229]: 2026-01-05T21:14:24Z|00053|binding|INFO|Setting lport 6fba2106-2ecf-47b1-ba86-3ca344528342 ovn-installed in OVS
Jan 05 21:14:24 compute-0 ovn_controller[98229]: 2026-01-05T21:14:24Z|00054|binding|INFO|Setting lport 6fba2106-2ecf-47b1-ba86-3ca344528342 up in Southbound
Jan 05 21:14:24 compute-0 systemd-udevd[243681]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:14:24 compute-0 nova_compute[186018]: 2026-01-05 21:14:24.862 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.877 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[86b82811-d81d-4fc4-9009-d2e4c414ba2f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:24 compute-0 NetworkManager[56598]: <info>  [1767647664.8867] device (tap6fba2106-2e): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:14:24 compute-0 systemd-machined[157312]: New machine qemu-4-instance-00000004.
Jan 05 21:14:24 compute-0 NetworkManager[56598]: <info>  [1767647664.8968] device (tap6fba2106-2e): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:14:24 compute-0 systemd[1]: Started Virtual Machine qemu-4-instance-00000004.
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.919 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[df9bf01f-3288-48da-809e-a2429f84a25e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.924 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[7ae3d93f-3baf-44f5-bb1e-56e97de24b23]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.949 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[90520e21-c0cd-43f6-a751-d3b33e8c4d51]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:24 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:24.981 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[af6b33f4-f0d1-44d4-b5d9-ddfee631f957]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 6, 'tx_packets': 9, 'rx_bytes': 532, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 16123, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 243696, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.001 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[7e6bdc97-9696-45e8-a2a9-4e8b436009e4]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243698, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 243698, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.003 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.005 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.006 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.006 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.007 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.007 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:14:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:25.008 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.189 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647665.188512, 4f980272-c18f-4c66-9c04-8a07a7115de7 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.190 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] VM Started (Lifecycle Event)
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.207 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.214 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647665.188852, 4f980272-c18f-4c66-9c04-8a07a7115de7 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.214 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] VM Paused (Lifecycle Event)
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.237 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.244 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.264 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.776 186022 DEBUG nova.compute.manager [req-d2c5a9d1-7e04-4c41-b720-97168e30a4fa req-64e8e223-4786-4c3d-8c7d-2134f2ea5dd8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.777 186022 DEBUG oslo_concurrency.lockutils [req-d2c5a9d1-7e04-4c41-b720-97168e30a4fa req-64e8e223-4786-4c3d-8c7d-2134f2ea5dd8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.777 186022 DEBUG oslo_concurrency.lockutils [req-d2c5a9d1-7e04-4c41-b720-97168e30a4fa req-64e8e223-4786-4c3d-8c7d-2134f2ea5dd8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.778 186022 DEBUG oslo_concurrency.lockutils [req-d2c5a9d1-7e04-4c41-b720-97168e30a4fa req-64e8e223-4786-4c3d-8c7d-2134f2ea5dd8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.778 186022 DEBUG nova.compute.manager [req-d2c5a9d1-7e04-4c41-b720-97168e30a4fa req-64e8e223-4786-4c3d-8c7d-2134f2ea5dd8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Processing event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.780 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.786 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767647665.7860208, 4f980272-c18f-4c66-9c04-8a07a7115de7 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.787 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] VM Resumed (Lifecycle Event)
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.791 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.801 186022 INFO nova.virt.libvirt.driver [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Instance spawned successfully.
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.801 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.818 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.830 186022 DEBUG nova.network.neutron [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updated VIF entry in instance network info cache for port 6fba2106-2ecf-47b1-ba86-3ca344528342. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.832 186022 DEBUG nova.network.neutron [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.851 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.862 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.862 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.863 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.864 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.865 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.866 186022 DEBUG nova.virt.libvirt.driver [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.886 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.895 186022 DEBUG oslo_concurrency.lockutils [req-26888ac3-d068-46a6-aaee-260b9e1a3de5 req-0817d6ba-cbac-4580-9ab8-5b71d2b15ecb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.931 186022 INFO nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Took 7.11 seconds to spawn the instance on the hypervisor.
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.932 186022 DEBUG nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:14:25 compute-0 nova_compute[186018]: 2026-01-05 21:14:25.993 186022 INFO nova.compute.manager [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Took 7.67 seconds to build instance.
Jan 05 21:14:26 compute-0 nova_compute[186018]: 2026-01-05 21:14:26.012 186022 DEBUG oslo_concurrency.lockutils [None req-390a760a-e891-40a2-b3ca-20e9cfbbb84e 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 7.775s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:26 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:14:26 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:14:26 compute-0 podman[243706]: 2026-01-05 21:14:26.526046325 +0000 UTC m=+0.098237407 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.129 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.893 186022 DEBUG nova.compute.manager [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.893 186022 DEBUG oslo_concurrency.lockutils [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.894 186022 DEBUG oslo_concurrency.lockutils [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.895 186022 DEBUG oslo_concurrency.lockutils [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.895 186022 DEBUG nova.compute.manager [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] No waiting events found dispatching network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:14:27 compute-0 nova_compute[186018]: 2026-01-05 21:14:27.896 186022 WARNING nova.compute.manager [req-7916877d-25ce-4ced-afc7-0c1068e29416 req-37f066b6-8943-4226-a69b-d270a1f92cce 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received unexpected event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 for instance with vm_state active and task_state None.
Jan 05 21:14:28 compute-0 nova_compute[186018]: 2026-01-05 21:14:28.663 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:29 compute-0 podman[202426]: time="2026-01-05T21:14:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:14:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:14:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:14:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:14:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4360 "" "Go-http-client/1.1"
Jan 05 21:14:30 compute-0 podman[243749]: 2026-01-05 21:14:30.798828714 +0000 UTC m=+0.131231845 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 05 21:14:31 compute-0 openstack_network_exporter[205720]: ERROR   21:14:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:14:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:14:31 compute-0 openstack_network_exporter[205720]: ERROR   21:14:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:14:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:14:32 compute-0 nova_compute[186018]: 2026-01-05 21:14:32.131 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:33 compute-0 nova_compute[186018]: 2026-01-05 21:14:33.664 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:33 compute-0 podman[243767]: 2026-01-05 21:14:33.729135963 +0000 UTC m=+0.085464991 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, release-0.7.12=, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible)
Jan 05 21:14:35 compute-0 podman[243787]: 2026-01-05 21:14:35.782691642 +0000 UTC m=+0.121666044 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 05 21:14:37 compute-0 nova_compute[186018]: 2026-01-05 21:14:37.133 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:38 compute-0 nova_compute[186018]: 2026-01-05 21:14:38.667 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:42 compute-0 nova_compute[186018]: 2026-01-05 21:14:42.136 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:42 compute-0 podman[243806]: 2026-01-05 21:14:42.786670397 +0000 UTC m=+0.125198247 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, io.buildah.version=1.33.7, version=9.6, vcs-type=git, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, release=1755695350, com.redhat.component=ubi9-minimal-container)
Jan 05 21:14:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:42.847 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:14:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:42.848 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:14:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:14:42.849 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:14:43 compute-0 nova_compute[186018]: 2026-01-05 21:14:43.671 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:43 compute-0 podman[243826]: 2026-01-05 21:14:43.812986003 +0000 UTC m=+0.162004525 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:14:47 compute-0 nova_compute[186018]: 2026-01-05 21:14:47.140 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:48 compute-0 nova_compute[186018]: 2026-01-05 21:14:48.677 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:48 compute-0 podman[243851]: 2026-01-05 21:14:48.786963201 +0000 UTC m=+0.126448929 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:14:48 compute-0 podman[243852]: 2026-01-05 21:14:48.793129484 +0000 UTC m=+0.118293345 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:14:52 compute-0 nova_compute[186018]: 2026-01-05 21:14:52.142 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:53 compute-0 nova_compute[186018]: 2026-01-05 21:14:53.681 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:54 compute-0 ovn_controller[98229]: 2026-01-05T21:14:54Z|00055|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 05 21:14:56 compute-0 podman[243896]: 2026-01-05 21:14:56.826475178 +0000 UTC m=+0.159077599 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:14:57 compute-0 nova_compute[186018]: 2026-01-05 21:14:57.145 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:58 compute-0 ovn_controller[98229]: 2026-01-05T21:14:58Z|00010|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:71:37:b5 192.168.0.7
Jan 05 21:14:58 compute-0 ovn_controller[98229]: 2026-01-05T21:14:58Z|00011|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:71:37:b5 192.168.0.7
Jan 05 21:14:58 compute-0 nova_compute[186018]: 2026-01-05 21:14:58.685 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:14:59 compute-0 podman[202426]: time="2026-01-05T21:14:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:14:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:14:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:14:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:14:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4356 "" "Go-http-client/1.1"
Jan 05 21:15:00 compute-0 nova_compute[186018]: 2026-01-05 21:15:00.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:00 compute-0 nova_compute[186018]: 2026-01-05 21:15:00.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:15:01 compute-0 openstack_network_exporter[205720]: ERROR   21:15:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:15:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:15:01 compute-0 openstack_network_exporter[205720]: ERROR   21:15:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:15:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:15:01 compute-0 podman[243929]: 2026-01-05 21:15:01.77294015 +0000 UTC m=+0.101780591 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:15:02 compute-0 nova_compute[186018]: 2026-01-05 21:15:02.149 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:03 compute-0 nova_compute[186018]: 2026-01-05 21:15:03.688 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:04 compute-0 nova_compute[186018]: 2026-01-05 21:15:04.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:04 compute-0 podman[243947]: 2026-01-05 21:15:04.760027254 +0000 UTC m=+0.106575867 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, maintainer=Red Hat, Inc., name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.openshift.expose-services=, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., version=9.4, container_name=kepler, com.redhat.component=ubi9-container)
Jan 05 21:15:05 compute-0 nova_compute[186018]: 2026-01-05 21:15:05.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:05 compute-0 nova_compute[186018]: 2026-01-05 21:15:05.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:15:05 compute-0 nova_compute[186018]: 2026-01-05 21:15:05.739 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:15:05 compute-0 nova_compute[186018]: 2026-01-05 21:15:05.739 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:15:05 compute-0 nova_compute[186018]: 2026-01-05 21:15:05.740 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:15:06 compute-0 podman[243968]: 2026-01-05 21:15:06.764905672 +0000 UTC m=+0.111245370 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:15:07 compute-0 nova_compute[186018]: 2026-01-05 21:15:07.153 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.782 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.786 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.796 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 4f980272-c18f-4c66-9c04-8a07a7115de7 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:15:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:07.799 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/4f980272-c18f-4c66-9c04-8a07a7115de7 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.220 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1958 Content-Type: application/json Date: Mon, 05 Jan 2026 21:15:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-635189cd-3e6e-420d-b0d3-8acd2c2313e7 x-openstack-request-id: req-635189cd-3e6e-420d-b0d3-8acd2c2313e7 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.221 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "4f980272-c18f-4c66-9c04-8a07a7115de7", "name": "vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak", "status": "ACTIVE", "tenant_id": "704814115a61471f9b45484171f67b5f", "user_id": "41f377b42540490198f271301cf5fe90", "metadata": {"metering.server_group": "a6371b97-6a0c-4b37-9443-eaf5410da4a4"}, "hostId": "cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424", "image": {"id": "31cf9c34-2e56-49e9-bb98-955ac3cf9185", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/31cf9c34-2e56-49e9-bb98-955ac3cf9185"}]}, "flavor": {"id": "d9d5992a-1c00-4233-a43d-71321ed82446", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/d9d5992a-1c00-4233-a43d-71321ed82446"}]}, "created": "2026-01-05T21:14:16Z", "updated": "2026-01-05T21:14:25Z", "addresses": {"private": [{"version": 4, "addr": "192.168.0.7", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:71:37:b5"}, {"version": 4, "addr": "192.168.122.208", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:71:37:b5"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/4f980272-c18f-4c66-9c04-8a07a7115de7"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/4f980272-c18f-4c66-9c04-8a07a7115de7"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:14:25.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "basic"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000004", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.221 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/4f980272-c18f-4c66-9c04-8a07a7115de7 used request id req-635189cd-3e6e-420d-b0d3-8acd2c2313e7 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.224 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'name': 'vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.229 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'name': 'vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.235 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'name': 'vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.241 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.242 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.242 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.243 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.244 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.245 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:15:08.243941) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.247 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.247 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.248 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.248 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.249 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.249 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.250 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:15:08.249663) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.257 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 4f980272-c18f-4c66-9c04-8a07a7115de7 / tap6fba2106-2e inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.258 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.268 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 nova_compute[186018]: 2026-01-05 21:15:08.269 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.277 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.285 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 nova_compute[186018]: 2026-01-05 21:15:08.285 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:15:08 compute-0 nova_compute[186018]: 2026-01-05 21:15:08.285 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.287 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:15:08.288641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.290 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.291 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.292 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.293 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.295 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.295 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.296 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets volume: 13 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.296 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets volume: 64 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.297 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.297 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:15:08.295870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.299 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.299 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.299 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.300 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.300 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.302 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.302 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:15:08.300368) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.303 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.303 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes volume: 1666 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.304 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes volume: 7460 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.304 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.305 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.305 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:15:08.303456) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.306 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.306 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.307 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.307 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.307 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.308 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.308 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes.delta volume: 2672 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.309 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes.delta volume: 535 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.309 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.310 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:15:08.307793) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.311 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.312 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:15:08.311806) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.312 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak>]
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.432 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.433 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.433 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.434 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.435 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.435 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.435 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:15:08.432486) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.436 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:15:08.434316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:15:08.436450) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.479 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.480 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.481 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.533 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.534 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.534 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.578 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.579 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.579 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.624 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.625 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.626 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.627 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.628 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.628 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes.delta volume: 3431 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.628 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.629 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.629 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.630 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak>]
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.631 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.632 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.632 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.632 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.633 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:15:08.627898) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.634 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:15:08.630707) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:15:08.631761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.635 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:15:08.633791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.677 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/memory.usage volume: 49.61328125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 nova_compute[186018]: 2026-01-05 21:15:08.692 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.715 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.765 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.810 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.812 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.813 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.813 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.814 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.814 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:15:08.812900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.816 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.816 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.817 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:15:08.816589) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.818 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.818 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.818 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.819 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.819 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.820 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.820 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.820 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.820 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.821 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:15:08.822827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.943 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.944 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:08.944 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.044 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.045 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.047 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.169 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.170 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.171 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.281 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.282 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.282 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.284 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.285 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:15:09.285666) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.286 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.287 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.288 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.288 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.290 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.290 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.290 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.291 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.292 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:15:09.291324) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.291 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.292 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 461858933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.293 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 95970893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.293 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 69940491 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.294 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 441838413 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.294 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 97302278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.295 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 82890817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.296 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 420422303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.297 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 95348408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.298 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 83683963 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.298 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.299 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.299 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.301 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.301 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.301 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.302 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.302 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.303 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:15:09.302645) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.302 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.303 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.304 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.304 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.305 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.306 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.306 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.307 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.307 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.308 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.309 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.309 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.309 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.311 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.311 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.311 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.311 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.311 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:15:09.311779) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.312 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.312 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.313 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.313 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.314 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.314 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.314 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.315 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.315 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.315 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.316 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.316 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.317 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.317 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.317 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.318 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.318 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.318 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.319 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:15:09.318447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.319 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 41730048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.319 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.319 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.320 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 41865216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.320 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.321 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.321 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 41803776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.321 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.322 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.322 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.323 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.323 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.324 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.324 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.324 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.324 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.324 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.325 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.325 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:15:09.325112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.325 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/cpu volume: 32270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.326 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/cpu volume: 359490000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.326 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/cpu volume: 35620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.326 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 40060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.327 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.327 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.327 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.328 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.328 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.328 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.329 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:15:09.328422) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.329 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 1105585581 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.329 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 12951810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.329 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.330 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 1663393747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.330 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 11989637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.330 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.331 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 1181074077 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.331 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 12113149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.332 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.332 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.332 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.333 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.334 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.334 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.334 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.334 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.334 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.335 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.335 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 219 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.336 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.336 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:15:09.335071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.336 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.336 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.337 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.337 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.337 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.338 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.338 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.339 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.339 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.339 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.340 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.341 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.342 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.343 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:15:09.344 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:15:09 compute-0 nova_compute[186018]: 2026-01-05 21:15:09.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:09 compute-0 nova_compute[186018]: 2026-01-05 21:15:09.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.500 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.676 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.778 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.102s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.780 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.852 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.854 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.924 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.929 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:11 compute-0 nova_compute[186018]: 2026-01-05 21:15:11.990 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.004 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.111 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.112 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.157 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.193 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.195 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.280 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.281 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.346 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.353 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.410 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.411 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.509 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.509 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.598 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.600 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.680 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.690 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.751 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.753 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.825 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.827 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.915 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.917 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:15:12 compute-0 nova_compute[186018]: 2026-01-05 21:15:12.990 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.596 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.599 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4709MB free_disk=72.35673904418945GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.600 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.600 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.695 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.722 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.723 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.723 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.724 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.724 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.725 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:15:13 compute-0 podman[244038]: 2026-01-05 21:15:13.810803691 +0000 UTC m=+0.146972960 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vendor=Red Hat, Inc., vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9)
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.818 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.835 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.857 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:15:13 compute-0 nova_compute[186018]: 2026-01-05 21:15:13.857 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.257s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:15:14 compute-0 podman[244059]: 2026-01-05 21:15:14.078632692 +0000 UTC m=+0.201906566 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
container_name=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 21:15:14 compute-0 nova_compute[186018]: 2026-01-05 21:15:14.857 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:14 compute-0 nova_compute[186018]: 2026-01-05 21:15:14.858 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:14 compute-0 nova_compute[186018]: 2026-01-05 21:15:14.859 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:15:17 compute-0 nova_compute[186018]: 2026-01-05 21:15:17.160 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:18 compute-0 nova_compute[186018]: 2026-01-05 21:15:18.701 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:19 compute-0 podman[244087]: 2026-01-05 21:15:19.772142326 +0000 UTC m=+0.108238050 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:15:19 compute-0 podman[244086]: 2026-01-05 21:15:19.794105664 +0000 UTC m=+0.128419592 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:15:22 compute-0 nova_compute[186018]: 2026-01-05 21:15:22.162 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:23 compute-0 nova_compute[186018]: 2026-01-05 21:15:23.705 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:27 compute-0 nova_compute[186018]: 2026-01-05 21:15:27.166 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:27 compute-0 podman[244128]: 2026-01-05 21:15:27.76011319 +0000 UTC m=+0.087675299 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:15:28 compute-0 nova_compute[186018]: 2026-01-05 21:15:28.709 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:29 compute-0 podman[202426]: time="2026-01-05T21:15:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:15:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:15:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:15:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:15:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4349 "" "Go-http-client/1.1"
Jan 05 21:15:31 compute-0 openstack_network_exporter[205720]: ERROR   21:15:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:15:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:15:31 compute-0 openstack_network_exporter[205720]: ERROR   21:15:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:15:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:15:32 compute-0 nova_compute[186018]: 2026-01-05 21:15:32.171 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:32 compute-0 podman[244153]: 2026-01-05 21:15:32.796401966 +0000 UTC m=+0.135438247 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 21:15:33 compute-0 nova_compute[186018]: 2026-01-05 21:15:33.713 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:35 compute-0 podman[244172]: 2026-01-05 21:15:35.787527135 +0000 UTC m=+0.127161298 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, vendor=Red Hat, Inc., version=9.4, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler, container_name=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9)
Jan 05 21:15:37 compute-0 nova_compute[186018]: 2026-01-05 21:15:37.174 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:37 compute-0 podman[244192]: 2026-01-05 21:15:37.801098631 +0000 UTC m=+0.134577284 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_compute)
Jan 05 21:15:38 compute-0 nova_compute[186018]: 2026-01-05 21:15:38.717 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:42 compute-0 nova_compute[186018]: 2026-01-05 21:15:42.177 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:15:42.849 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:15:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:15:42.850 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:15:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:15:42.850 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:15:43 compute-0 nova_compute[186018]: 2026-01-05 21:15:43.720 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:44 compute-0 podman[244213]: 2026-01-05 21:15:44.778494328 +0000 UTC m=+0.107934012 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, distribution-scope=public, release=1755695350, config_id=openstack_network_exporter, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc.)
Jan 05 21:15:44 compute-0 podman[244212]: 2026-01-05 21:15:44.823323068 +0000 UTC m=+0.148461059 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:15:47 compute-0 nova_compute[186018]: 2026-01-05 21:15:47.180 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:48 compute-0 nova_compute[186018]: 2026-01-05 21:15:48.725 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:50 compute-0 podman[244258]: 2026-01-05 21:15:50.75545048 +0000 UTC m=+0.098806712 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:15:50 compute-0 podman[244259]: 2026-01-05 21:15:50.777779918 +0000 UTC m=+0.105108898 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:15:52 compute-0 nova_compute[186018]: 2026-01-05 21:15:52.182 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:53 compute-0 nova_compute[186018]: 2026-01-05 21:15:53.728 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:57 compute-0 nova_compute[186018]: 2026-01-05 21:15:57.185 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:58 compute-0 nova_compute[186018]: 2026-01-05 21:15:58.733 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:15:58 compute-0 podman[244298]: 2026-01-05 21:15:58.796218746 +0000 UTC m=+0.128024200 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:15:59 compute-0 podman[202426]: time="2026-01-05T21:15:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:15:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:15:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:15:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:15:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 05 21:16:01 compute-0 openstack_network_exporter[205720]: ERROR   21:16:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:16:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:16:01 compute-0 openstack_network_exporter[205720]: ERROR   21:16:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:16:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:16:02 compute-0 nova_compute[186018]: 2026-01-05 21:16:02.189 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:02 compute-0 nova_compute[186018]: 2026-01-05 21:16:02.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:02 compute-0 nova_compute[186018]: 2026-01-05 21:16:02.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:16:03 compute-0 nova_compute[186018]: 2026-01-05 21:16:03.739 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:03 compute-0 podman[244330]: 2026-01-05 21:16:03.802628232 +0000 UTC m=+0.134884442 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 21:16:05 compute-0 nova_compute[186018]: 2026-01-05 21:16:05.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:06 compute-0 podman[244349]: 2026-01-05 21:16:06.778947059 +0000 UTC m=+0.113325224 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., distribution-scope=public, release-0.7.12=, managed_by=edpm_ansible, container_name=kepler, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_id=kepler, io.openshift.tags=base rhel9, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4)
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.193 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.846 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.847 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:16:07 compute-0 nova_compute[186018]: 2026-01-05 21:16:07.847 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:16:08 compute-0 nova_compute[186018]: 2026-01-05 21:16:08.742 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:08 compute-0 podman[244369]: 2026-01-05 21:16:08.743164468 +0000 UTC m=+0.097088407 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true)
Jan 05 21:16:09 compute-0 nova_compute[186018]: 2026-01-05 21:16:09.104 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:16:09 compute-0 nova_compute[186018]: 2026-01-05 21:16:09.124 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:16:09 compute-0 nova_compute[186018]: 2026-01-05 21:16:09.124 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:16:09 compute-0 nova_compute[186018]: 2026-01-05 21:16:09.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:09 compute-0 nova_compute[186018]: 2026-01-05 21:16:09.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.059 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.228 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.230 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid d0894ce8-3815-41f8-a495-2329081a9ed2 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.231 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid bc5c255f-3071-4754-9c2a-302e6237171f _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.232 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid 4f980272-c18f-4c66-9c04-8a07a7115de7 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.233 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.234 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.235 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.236 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.236 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.237 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.238 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.239 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.290 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.057s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.291 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.056s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.318 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.079s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.328 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.091s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.636 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:10 compute-0 nova_compute[186018]: 2026-01-05 21:16:10.636 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.195 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.506 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.507 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.508 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.508 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.634 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.727 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.728 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.793 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.795 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.881 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.087s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.884 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.980 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:12 compute-0 nova_compute[186018]: 2026-01-05 21:16:12.988 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.045 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.046 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.103 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.104 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.183 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.185 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.270 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.085s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.283 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.349 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.351 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.445 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.447 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.539 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.541 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.604 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.613 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.679 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.680 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.736 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.738 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.754 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.807 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.808 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:16:13 compute-0 nova_compute[186018]: 2026-01-05 21:16:13.889 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.346 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.348 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4619MB free_disk=72.35673904418945GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.348 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.349 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.612 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.613 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.614 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.614 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.615 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.615 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.868 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.964 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.967 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.968 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.619s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.969 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:14 compute-0 nova_compute[186018]: 2026-01-05 21:16:14.970 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:16:15 compute-0 nova_compute[186018]: 2026-01-05 21:16:15.097 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:16:15 compute-0 podman[244438]: 2026-01-05 21:16:15.771390962 +0000 UTC m=+0.119506907 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:16:15 compute-0 podman[244439]: 2026-01-05 21:16:15.803041146 +0000 UTC m=+0.130889577 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., version=9.6)
Jan 05 21:16:16 compute-0 nova_compute[186018]: 2026-01-05 21:16:16.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:16 compute-0 nova_compute[186018]: 2026-01-05 21:16:16.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:16 compute-0 nova_compute[186018]: 2026-01-05 21:16:16.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:16 compute-0 nova_compute[186018]: 2026-01-05 21:16:16.463 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:17 compute-0 nova_compute[186018]: 2026-01-05 21:16:17.200 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:18 compute-0 nova_compute[186018]: 2026-01-05 21:16:18.474 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:16:18 compute-0 nova_compute[186018]: 2026-01-05 21:16:18.758 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:21 compute-0 podman[244484]: 2026-01-05 21:16:21.760863042 +0000 UTC m=+0.102679284 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:16:21 compute-0 podman[244485]: 2026-01-05 21:16:21.802529248 +0000 UTC m=+0.131671777 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:16:22 compute-0 nova_compute[186018]: 2026-01-05 21:16:22.203 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:23 compute-0 nova_compute[186018]: 2026-01-05 21:16:23.764 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:27 compute-0 nova_compute[186018]: 2026-01-05 21:16:27.206 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:28 compute-0 nova_compute[186018]: 2026-01-05 21:16:28.768 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:29 compute-0 podman[202426]: time="2026-01-05T21:16:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:16:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:16:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:16:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:16:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4357 "" "Go-http-client/1.1"
Jan 05 21:16:29 compute-0 podman[244524]: 2026-01-05 21:16:29.764564376 +0000 UTC m=+0.110838259 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:16:31 compute-0 openstack_network_exporter[205720]: ERROR   21:16:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:16:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:16:31 compute-0 openstack_network_exporter[205720]: ERROR   21:16:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:16:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:16:32 compute-0 nova_compute[186018]: 2026-01-05 21:16:32.210 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:33 compute-0 nova_compute[186018]: 2026-01-05 21:16:33.773 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:34 compute-0 podman[244547]: 2026-01-05 21:16:34.772128643 +0000 UTC m=+0.118318556 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 05 21:16:37 compute-0 nova_compute[186018]: 2026-01-05 21:16:37.213 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:37 compute-0 podman[244566]: 2026-01-05 21:16:37.732769798 +0000 UTC m=+0.076264309 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, vcs-type=git, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, 
io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, name=ubi9, build-date=2024-09-18T21:23:30, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., io.openshift.expose-services=, container_name=kepler, release-0.7.12=, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64)
Jan 05 21:16:38 compute-0 nova_compute[186018]: 2026-01-05 21:16:38.779 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:39 compute-0 podman[244586]: 2026-01-05 21:16:39.809072133 +0000 UTC m=+0.137261034 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 05 21:16:42 compute-0 nova_compute[186018]: 2026-01-05 21:16:42.217 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:16:42.850 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:16:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:16:42.851 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:16:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:16:42.852 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:16:43 compute-0 nova_compute[186018]: 2026-01-05 21:16:43.784 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:46 compute-0 podman[244608]: 2026-01-05 21:16:46.776123395 +0000 UTC m=+0.097970160 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.buildah.version=1.33.7, container_name=openstack_network_exporter, summary=Provides 
the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, build-date=2025-08-20T13:12:41, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 05 21:16:46 compute-0 podman[244607]: 2026-01-05 21:16:46.82456925 +0000 UTC m=+0.161403400 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
config_id=ovn_controller, org.label-schema.license=GPLv2)
Jan 05 21:16:47 compute-0 nova_compute[186018]: 2026-01-05 21:16:47.220 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:48 compute-0 nova_compute[186018]: 2026-01-05 21:16:48.789 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:52 compute-0 nova_compute[186018]: 2026-01-05 21:16:52.223 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:52 compute-0 podman[244655]: 2026-01-05 21:16:52.764516675 +0000 UTC m=+0.096615835 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:16:52 compute-0 podman[244654]: 2026-01-05 21:16:52.769919257 +0000 UTC m=+0.119590519 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 21:16:53 compute-0 nova_compute[186018]: 2026-01-05 21:16:53.793 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:57 compute-0 nova_compute[186018]: 2026-01-05 21:16:57.226 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:58 compute-0 nova_compute[186018]: 2026-01-05 21:16:58.799 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:16:59 compute-0 podman[202426]: time="2026-01-05T21:16:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:16:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:16:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:16:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:16:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4365 "" "Go-http-client/1.1"
Jan 05 21:17:00 compute-0 podman[244694]: 2026-01-05 21:17:00.751007164 +0000 UTC m=+0.096714837 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:17:01 compute-0 openstack_network_exporter[205720]: ERROR   21:17:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:17:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:17:01 compute-0 openstack_network_exporter[205720]: ERROR   21:17:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:17:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:17:02 compute-0 nova_compute[186018]: 2026-01-05 21:17:02.228 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:02 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:17:03 compute-0 nova_compute[186018]: 2026-01-05 21:17:03.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:03 compute-0 nova_compute[186018]: 2026-01-05 21:17:03.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:17:03 compute-0 nova_compute[186018]: 2026-01-05 21:17:03.802 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:05 compute-0 podman[244718]: 2026-01-05 21:17:05.798102327 +0000 UTC m=+0.129621740 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.231 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.783 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.783 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1d6ba0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.797 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'name': 'vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.803 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'name': 'vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000002', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.808 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'name': 'vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.814 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.815 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:17:07.815760) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.818 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.819 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:17:07.819480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.827 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.834 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets volume: 53 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.840 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.846 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.848 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.848 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.849 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.849 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.850 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:17:07.849155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.850 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.851 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.852 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.853 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets volume: 65 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:17:07.852538) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.853 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets volume: 21 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.853 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.854 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.854 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.854 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.855 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.856 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:17:07.855509) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.856 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.857 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.857 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.858 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:17:07.857524) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.858 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes volume: 7530 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.858 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes volume: 2286 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.859 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.860 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.860 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.860 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.860 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.860 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes.delta volume: 620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.861 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.861 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.861 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:17:07.860637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.863 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.864 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.864 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.864 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.864 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.865 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:17:07.864046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.865 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.865 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.866 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.867 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.867 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.867 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.867 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.868 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.868 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.868 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.868 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:17:07.867499) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.869 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.869 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.870 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.870 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.871 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:17:07.870478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.902 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.902 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.903 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.919 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.919 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.920 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:17:07 compute-0 nova_compute[186018]: 2026-01-05 21:17:07.921 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.938 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 21962752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.938 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.939 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.976 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.976 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:07.977 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.009 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.009 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.010 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.011 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.011 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.011 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.012 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.012 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.012 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.012 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.013 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.014 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.014 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.015 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.016 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:17:08.012678) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.017 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.017 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.018 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.019 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.020 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.021 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.022 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:17:08.021743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.021 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.022 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.023 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.023 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.023 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.024 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.025 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:17:08.024880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.066 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/memory.usage volume: 49.1015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.103 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/memory.usage volume: 48.97265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.125 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/memory.usage volume: 49.04296875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.157 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.76171875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.160 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.162 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:17:08.161974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.163 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.163 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.163 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.164 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.165 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.165 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:17:08.164817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.165 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.165 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.166 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.166 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.166 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.166 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.167 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.167 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.167 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.167 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.168 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:17:08.168388) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.263 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.264 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.264 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.355 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 23325184 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.356 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.356 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.424 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.424 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.425 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.487 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.487 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.487 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.488 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.488 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.488 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.488 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.488 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.489 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.489 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes volume: 1486 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.489 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/network.incoming.bytes volume: 8322 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.489 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2136 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.489 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:17:08.489039) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.490 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 461858933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 95970893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:17:08.491029) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 69940491 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.491 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 441838413 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.492 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 97302278 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.492 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.latency volume: 82890817 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.492 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 420422303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.492 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 95348408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.493 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 83683963 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.493 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.493 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.493 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.494 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.495 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.495 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.495 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.495 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 844 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.496 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.496 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.496 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:17:08.494824) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.497 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.498 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.498 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.498 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.498 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.498 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:17:08.498965) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.499 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.500 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.500 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.500 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.500 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.501 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.501 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.501 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.501 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.502 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.503 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.503 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.503 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 41865216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.503 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.504 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.504 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 41803776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.504 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.505 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.505 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.505 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:17:08.502826) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.505 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.505 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.506 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.506 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.506 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.506 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/cpu volume: 34060000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/cpu volume: 361180000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:17:08.507084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.507 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/cpu volume: 37320000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 41730000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.508 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.509 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.509 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 1129111979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.509 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 12951810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.509 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.509 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:17:08.508963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.510 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 1663393747 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.510 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 11989637 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.510 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.510 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 1181074077 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.511 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 12113149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.511 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.511 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.511 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.511 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.512 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.513 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.513 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.513 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:17:08.512859) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.513 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.513 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 240 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.514 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.514 14 DEBUG ceilometer.compute.pollsters [-] d0894ce8-3815-41f8-a495-2329081a9ed2/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.514 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.514 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.514 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.515 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.515 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.515 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.516 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.516 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.517 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.518 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:17:08.519 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:17:08 compute-0 podman[244738]: 2026-01-05 21:17:08.739206567 +0000 UTC m=+0.089678072 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, container_name=kepler, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.openshift.expose-services=, name=ubi9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=kepler, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, release-0.7.12=, com.redhat.component=ubi9-container)
Jan 05 21:17:08 compute-0 nova_compute[186018]: 2026-01-05 21:17:08.805 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:09 compute-0 nova_compute[186018]: 2026-01-05 21:17:09.088 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:17:09 compute-0 nova_compute[186018]: 2026-01-05 21:17:09.106 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:17:09 compute-0 nova_compute[186018]: 2026-01-05 21:17:09.107 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:17:09 compute-0 nova_compute[186018]: 2026-01-05 21:17:09.108 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:10 compute-0 podman[244757]: 2026-01-05 21:17:10.771020074 +0000 UTC m=+0.118301293 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute)
Jan 05 21:17:11 compute-0 nova_compute[186018]: 2026-01-05 21:17:11.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:11 compute-0 nova_compute[186018]: 2026-01-05 21:17:11.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:12 compute-0 nova_compute[186018]: 2026-01-05 21:17:12.235 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:13 compute-0 nova_compute[186018]: 2026-01-05 21:17:13.811 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.487 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.488 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.489 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.619 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.753 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.134s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.755 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.836 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.838 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.934 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:14 compute-0 nova_compute[186018]: 2026-01-05 21:17:14.937 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.015 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.025 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.090 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.092 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.151 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.153 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.217 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.218 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.284 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2/disk.eph0 --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.294 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.363 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.364 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.440 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.442 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.502 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.503 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.566 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.574 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.672 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.674 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.751 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.753 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.817 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.818 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:17:15 compute-0 nova_compute[186018]: 2026-01-05 21:17:15.896 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.347 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.349 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4608MB free_disk=72.35673904418945GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.349 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.349 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance d0894ce8-3815-41f8-a495-2329081a9ed2 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.461 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.461 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2560MB phys_disk=79GB used_disk=8GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.479 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.497 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.498 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.514 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.531 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.638 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.653 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.656 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:17:16 compute-0 nova_compute[186018]: 2026-01-05 21:17:16.656 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.307s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:17:17 compute-0 nova_compute[186018]: 2026-01-05 21:17:17.238 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:17 compute-0 podman[244827]: 2026-01-05 21:17:17.836754057 +0000 UTC m=+0.163570450 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, architecture=x86_64, name=ubi9-minimal, io.buildah.version=1.33.7, release=1755695350, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container)
Jan 05 21:17:17 compute-0 podman[244826]: 2026-01-05 21:17:17.891954045 +0000 UTC m=+0.226084980 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 21:17:18 compute-0 nova_compute[186018]: 2026-01-05 21:17:18.657 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:18 compute-0 nova_compute[186018]: 2026-01-05 21:17:18.658 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:17:18 compute-0 nova_compute[186018]: 2026-01-05 21:17:18.816 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:22 compute-0 nova_compute[186018]: 2026-01-05 21:17:22.241 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:23 compute-0 podman[244872]: 2026-01-05 21:17:23.797004814 +0000 UTC m=+0.123200531 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:17:23 compute-0 podman[244871]: 2026-01-05 21:17:23.797773715 +0000 UTC m=+0.131770577 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:17:23 compute-0 nova_compute[186018]: 2026-01-05 21:17:23.819 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:27 compute-0 nova_compute[186018]: 2026-01-05 21:17:27.244 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:28 compute-0 nova_compute[186018]: 2026-01-05 21:17:28.821 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:29 compute-0 podman[202426]: time="2026-01-05T21:17:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:17:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:17:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:17:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:17:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 05 21:17:31 compute-0 openstack_network_exporter[205720]: ERROR   21:17:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:17:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:17:31 compute-0 openstack_network_exporter[205720]: ERROR   21:17:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:17:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:17:31 compute-0 podman[244911]: 2026-01-05 21:17:31.747899409 +0000 UTC m=+0.096351768 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:17:32 compute-0 nova_compute[186018]: 2026-01-05 21:17:32.247 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:33 compute-0 nova_compute[186018]: 2026-01-05 21:17:33.825 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:36 compute-0 podman[244934]: 2026-01-05 21:17:36.764047239 +0000 UTC m=+0.101349188 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:17:37 compute-0 nova_compute[186018]: 2026-01-05 21:17:37.252 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:38 compute-0 nova_compute[186018]: 2026-01-05 21:17:38.830 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:39 compute-0 podman[244953]: 2026-01-05 21:17:39.79209914 +0000 UTC m=+0.132376112 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=kepler, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, architecture=x86_64, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc.)
Jan 05 21:17:41 compute-0 podman[244971]: 2026-01-05 21:17:41.813952116 +0000 UTC m=+0.148127075 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:17:42 compute-0 nova_compute[186018]: 2026-01-05 21:17:42.256 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:17:42.851 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:17:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:17:42.852 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:17:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:17:42.852 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:17:43 compute-0 nova_compute[186018]: 2026-01-05 21:17:43.833 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:47 compute-0 nova_compute[186018]: 2026-01-05 21:17:47.259 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:48 compute-0 podman[244992]: 2026-01-05 21:17:48.81064071 +0000 UTC m=+0.132724391 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, distribution-scope=public, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:17:48 compute-0 nova_compute[186018]: 2026-01-05 21:17:48.836 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:48 compute-0 podman[244991]: 2026-01-05 21:17:48.843905963 +0000 UTC m=+0.177879466 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 05 21:17:52 compute-0 nova_compute[186018]: 2026-01-05 21:17:52.263 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:53 compute-0 nova_compute[186018]: 2026-01-05 21:17:53.840 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:54 compute-0 podman[245039]: 2026-01-05 21:17:54.774071249 +0000 UTC m=+0.117194364 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 21:17:54 compute-0 podman[245040]: 2026-01-05 21:17:54.801981711 +0000 UTC m=+0.134044036 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:17:57 compute-0 nova_compute[186018]: 2026-01-05 21:17:57.266 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:58 compute-0 nova_compute[186018]: 2026-01-05 21:17:58.845 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:17:59 compute-0 podman[202426]: time="2026-01-05T21:17:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:17:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:17:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:17:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:17:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 05 21:18:01 compute-0 openstack_network_exporter[205720]: ERROR   21:18:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:18:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:18:01 compute-0 openstack_network_exporter[205720]: ERROR   21:18:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:18:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:18:02 compute-0 nova_compute[186018]: 2026-01-05 21:18:02.269 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:02 compute-0 podman[245081]: 2026-01-05 21:18:02.770857717 +0000 UTC m=+0.112846050 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:18:03 compute-0 nova_compute[186018]: 2026-01-05 21:18:03.848 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.512 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.513 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.514 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.515 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.516 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.519 186022 INFO nova.compute.manager [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Terminating instance
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.521 186022 DEBUG nova.compute.manager [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:18:05 compute-0 kernel: tapf3274143-07 (unregistering): left promiscuous mode
Jan 05 21:18:05 compute-0 NetworkManager[56598]: <info>  [1767647885.5824] device (tapf3274143-07): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:18:05 compute-0 ovn_controller[98229]: 2026-01-05T21:18:05Z|00056|binding|INFO|Releasing lport f3274143-07c8-4956-b27c-98507a2443b2 from this chassis (sb_readonly=0)
Jan 05 21:18:05 compute-0 ovn_controller[98229]: 2026-01-05T21:18:05Z|00057|binding|INFO|Setting lport f3274143-07c8-4956-b27c-98507a2443b2 down in Southbound
Jan 05 21:18:05 compute-0 ovn_controller[98229]: 2026-01-05T21:18:05Z|00058|binding|INFO|Removing iface tapf3274143-07 ovn-installed in OVS
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.600 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.602 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.608 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:13:ee:71 192.168.0.216'], port_security=['fa:16:3e:13:ee:71 192.168.0.216'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-a47tklni2ayz-qhdfnok533vd-port-gbbzrm5s4gfv', 'neutron:cidrs': '192.168.0.216/24', 'neutron:device_id': 'd0894ce8-3815-41f8-a495-2329081a9ed2', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-a47tklni2ayz-qhdfnok533vd-port-gbbzrm5s4gfv', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.243', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=f3274143-07c8-4956-b27c-98507a2443b2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.610 107689 INFO neutron.agent.ovn.metadata.agent [-] Port f3274143-07c8-4956-b27c-98507a2443b2 in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 unbound from our chassis
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.612 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.612 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.630 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[6f31a4f0-4ba5-493b-8968-da503e1dfe64]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Deactivated successfully.
Jan 05 21:18:05 compute-0 systemd[1]: machine-qemu\x2d2\x2dinstance\x2d00000002.scope: Consumed 7min 19.664s CPU time.
Jan 05 21:18:05 compute-0 systemd-machined[157312]: Machine qemu-2-instance-00000002 terminated.
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.671 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[674ac464-5525-4e2d-9183-1c74c3c44a19]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.675 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[8c59615c-5b03-43ff-aedb-4cfd7b848f15]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.700 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[380231f0-5c9c-4783-a025-7c2bf8c010b4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.722 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[82bb90da-34a7-4a59-9882-3a5636ae2621]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 11, 'rx_bytes': 574, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 34860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245116, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.751 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[7eb52fc7-055e-4174-8f20-4d7b9f2ad2de]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245117, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245117, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.755 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.758 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.769 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.771 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.771 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.772 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:18:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:05.773 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.835 186022 INFO nova.virt.libvirt.driver [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Instance destroyed successfully.
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.835 186022 DEBUG nova.objects.instance [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'resources' on Instance uuid d0894ce8-3815-41f8-a495-2329081a9ed2 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.851 186022 DEBUG nova.virt.libvirt.vif [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:07:33Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-a47tklni2ayz-qhdfnok533vd-vnf-yh7a6zr6scmc',id=2,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:07:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-aoba20n9',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:07:46Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09ODUxNDA1MjQ5MjY5MDg5MjUzNT09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 21:18:05 compute-0 nova_compute[186018]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09ODUxN
DA1MjQ5MjY5MDg5MjUzNT09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTg1MTQwNTI0OTI2OTA4OTI1MzU9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT04NTE0MDUyNDkyNjkwODkyNTM1PT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=d0894ce8-3815-41f8-a495-2329081a9ed2,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.851 186022 DEBUG nova.network.os_vif_util [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.243", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.853 186022 DEBUG nova.network.os_vif_util [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.853 186022 DEBUG os_vif [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.857 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.857 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapf3274143-07, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.860 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.862 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.863 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.868 186022 INFO os_vif [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:13:ee:71,bridge_name='br-int',has_traffic_filtering=True,id=f3274143-07c8-4956-b27c-98507a2443b2,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tapf3274143-07')
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.869 186022 INFO nova.virt.libvirt.driver [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Deleting instance files /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2_del
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.871 186022 INFO nova.virt.libvirt.driver [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Deletion of /var/lib/nova/instances/d0894ce8-3815-41f8-a495-2329081a9ed2_del complete
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.914 186022 DEBUG nova.compute.manager [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-vif-unplugged-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.915 186022 DEBUG oslo_concurrency.lockutils [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.915 186022 DEBUG oslo_concurrency.lockutils [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.915 186022 DEBUG oslo_concurrency.lockutils [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.915 186022 DEBUG nova.compute.manager [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] No waiting events found dispatching network-vif-unplugged-f3274143-07c8-4956-b27c-98507a2443b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.916 186022 DEBUG nova.compute.manager [req-9b9841c0-dcdc-4d37-b212-7f62f0beb2c3 req-40a9f576-41aa-417e-ac03-4ae305e26feb 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-vif-unplugged-f3274143-07c8-4956-b27c-98507a2443b2 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.974 186022 DEBUG nova.virt.libvirt.host [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking UEFI support for host arch (x86_64) supports_uefi /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1754
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.974 186022 INFO nova.virt.libvirt.host [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] UEFI support detected
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.978 186022 INFO nova.compute.manager [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Took 0.46 seconds to destroy the instance on the hypervisor.
Jan 05 21:18:05 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:18:05.851 186022 DEBUG nova.virt.libvirt.vif [None req-bd056f2a-a2 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.979 186022 DEBUG oslo.service.loopingcall [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.979 186022 DEBUG nova.compute.manager [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:18:05 compute-0 nova_compute[186018]: 2026-01-05 21:18:05.979 186022 DEBUG nova.network.neutron [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:18:06 compute-0 nova_compute[186018]: 2026-01-05 21:18:06.852 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:06.853 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=7, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=6) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:18:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:06.856 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:18:07 compute-0 nova_compute[186018]: 2026-01-05 21:18:07.272 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:07 compute-0 podman[245140]: 2026-01-05 21:18:07.809956639 +0000 UTC m=+0.136335306 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.130 186022 DEBUG nova.compute.manager [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.131 186022 DEBUG oslo_concurrency.lockutils [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.132 186022 DEBUG oslo_concurrency.lockutils [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.132 186022 DEBUG oslo_concurrency.lockutils [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.133 186022 DEBUG nova.compute.manager [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] No waiting events found dispatching network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.133 186022 WARNING nova.compute.manager [req-c2a43372-1e7a-4203-b2ac-4c68d0c20116 req-fee5b435-59f0-400e-a3c7-c7686a0c33ee 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received unexpected event network-vif-plugged-f3274143-07c8-4956-b27c-98507a2443b2 for instance with vm_state active and task_state deleting.
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.468 186022 DEBUG nova.compute.manager [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Received event network-changed-f3274143-07c8-4956-b27c-98507a2443b2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.469 186022 DEBUG nova.compute.manager [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Refreshing instance network info cache due to event network-changed-f3274143-07c8-4956-b27c-98507a2443b2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.469 186022 DEBUG oslo_concurrency.lockutils [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.469 186022 DEBUG oslo_concurrency.lockutils [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.470 186022 DEBUG nova.network.neutron [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Refreshing network info cache for port f3274143-07c8-4956-b27c-98507a2443b2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:18:08 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:08.859 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '7'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.864 186022 DEBUG nova.network.neutron [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.882 186022 INFO nova.compute.manager [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Took 2.90 seconds to deallocate network for instance.
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.932 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:08 compute-0 nova_compute[186018]: 2026-01-05 21:18:08.932 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.068 186022 DEBUG nova.compute.provider_tree [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.087 186022 DEBUG nova.scheduler.client.report [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.113 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.181s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.160 186022 INFO nova.scheduler.client.report [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Deleted allocations for instance d0894ce8-3815-41f8-a495-2329081a9ed2
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.249 186022 DEBUG oslo_concurrency.lockutils [None req-bd056f2a-a231-4675-9736-a59912f9b38f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "d0894ce8-3815-41f8-a495-2329081a9ed2" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.735s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.464 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.465 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.799 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.800 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:18:09 compute-0 nova_compute[186018]: 2026-01-05 21:18:09.801 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:18:10 compute-0 podman[245161]: 2026-01-05 21:18:10.760491097 +0000 UTC m=+0.113640131 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, release-0.7.12=, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, build-date=2024-09-18T21:23:30, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', 
'/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, distribution-scope=public, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc.)
Jan 05 21:18:10 compute-0 nova_compute[186018]: 2026-01-05 21:18:10.861 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:11 compute-0 nova_compute[186018]: 2026-01-05 21:18:11.019 186022 DEBUG nova.network.neutron [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updated VIF entry in instance network info cache for port f3274143-07c8-4956-b27c-98507a2443b2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:18:11 compute-0 nova_compute[186018]: 2026-01-05 21:18:11.021 186022 DEBUG nova.network.neutron [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Updating instance_info_cache with network_info: [{"id": "f3274143-07c8-4956-b27c-98507a2443b2", "address": "fa:16:3e:13:ee:71", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.216", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapf3274143-07", "ovs_interfaceid": "f3274143-07c8-4956-b27c-98507a2443b2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:18:11 compute-0 nova_compute[186018]: 2026-01-05 21:18:11.045 186022 DEBUG oslo_concurrency.lockutils [req-4ad4f47c-c323-4787-bfed-171205ae3e0d req-8e19acb5-5383-4ba8-9c22-ef2d8c1f79d4 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-d0894ce8-3815-41f8-a495-2329081a9ed2" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:18:12 compute-0 nova_compute[186018]: 2026-01-05 21:18:12.274 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:12 compute-0 podman[245179]: 2026-01-05 21:18:12.752529271 +0000 UTC m=+0.096761368 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4)
Jan 05 21:18:12 compute-0 nova_compute[186018]: 2026-01-05 21:18:12.977 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [{"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:18:12 compute-0 nova_compute[186018]: 2026-01-05 21:18:12.997 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:18:12 compute-0 nova_compute[186018]: 2026-01-05 21:18:12.998 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:18:12 compute-0 nova_compute[186018]: 2026-01-05 21:18:12.998 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.493 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.612 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.692 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.694 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.756 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.759 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.836 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.837 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.891 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.897 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.954 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:14 compute-0 nova_compute[186018]: 2026-01-05 21:18:14.956 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.062 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.064 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.156 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.092s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.158 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.258 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.265 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.320 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.321 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.398 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.399 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.488 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.489 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.557 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:18:15 compute-0 nova_compute[186018]: 2026-01-05 21:18:15.863 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.069 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.071 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4781MB free_disk=72.37905883789062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.072 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.072 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.162 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.163 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.163 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.163 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.163 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.264 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.280 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.301 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:18:16 compute-0 nova_compute[186018]: 2026-01-05 21:18:16.301 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.229s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:17 compute-0 nova_compute[186018]: 2026-01-05 21:18:17.278 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:18 compute-0 nova_compute[186018]: 2026-01-05 21:18:18.301 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:18 compute-0 nova_compute[186018]: 2026-01-05 21:18:18.302 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:18 compute-0 nova_compute[186018]: 2026-01-05 21:18:18.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:18:19 compute-0 podman[245244]: 2026-01-05 21:18:19.800802908 +0000 UTC m=+0.122365009 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, release=1755695350, vcs-type=git, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, managed_by=edpm_ansible, config_id=openstack_network_exporter)
Jan 05 21:18:19 compute-0 podman[245243]: 2026-01-05 21:18:19.816989693 +0000 UTC m=+0.152130220 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 21:18:20 compute-0 nova_compute[186018]: 2026-01-05 21:18:20.830 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767647885.8291888, d0894ce8-3815-41f8-a495-2329081a9ed2 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:18:20 compute-0 nova_compute[186018]: 2026-01-05 21:18:20.831 186022 INFO nova.compute.manager [-] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] VM Stopped (Lifecycle Event)
Jan 05 21:18:20 compute-0 nova_compute[186018]: 2026-01-05 21:18:20.855 186022 DEBUG nova.compute.manager [None req-5a04a51b-fd0e-4214-bcb7-13013a70d30f - - - - - -] [instance: d0894ce8-3815-41f8-a495-2329081a9ed2] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:18:20 compute-0 nova_compute[186018]: 2026-01-05 21:18:20.867 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:22 compute-0 nova_compute[186018]: 2026-01-05 21:18:22.279 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:25 compute-0 podman[245286]: 2026-01-05 21:18:25.79974993 +0000 UTC m=+0.124065224 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:18:25 compute-0 podman[245285]: 2026-01-05 21:18:25.823642077 +0000 UTC m=+0.151473293 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 21:18:25 compute-0 nova_compute[186018]: 2026-01-05 21:18:25.870 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:27 compute-0 nova_compute[186018]: 2026-01-05 21:18:27.282 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:29 compute-0 podman[202426]: time="2026-01-05T21:18:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:18:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:18:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:18:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:18:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Jan 05 21:18:30 compute-0 nova_compute[186018]: 2026-01-05 21:18:30.872 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:31 compute-0 openstack_network_exporter[205720]: ERROR   21:18:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:18:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:18:31 compute-0 openstack_network_exporter[205720]: ERROR   21:18:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:18:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:18:32 compute-0 nova_compute[186018]: 2026-01-05 21:18:32.284 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:33 compute-0 podman[245326]: 2026-01-05 21:18:33.772188821 +0000 UTC m=+0.112795649 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:18:35 compute-0 nova_compute[186018]: 2026-01-05 21:18:35.876 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:37 compute-0 nova_compute[186018]: 2026-01-05 21:18:37.286 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:38 compute-0 podman[245347]: 2026-01-05 21:18:38.750903211 +0000 UTC m=+0.084082226 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 21:18:40 compute-0 nova_compute[186018]: 2026-01-05 21:18:40.880 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:41 compute-0 podman[245364]: 2026-01-05 21:18:41.754639843 +0000 UTC m=+0.104014039 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, vendor=Red Hat, Inc., container_name=kepler, distribution-scope=public, io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, name=ubi9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, com.redhat.component=ubi9-container, release=1214.1726694543, vcs-type=git, config_id=kepler, release-0.7.12=, architecture=x86_64, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 21:18:42 compute-0 ovn_controller[98229]: 2026-01-05T21:18:42Z|00059|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 05 21:18:42 compute-0 nova_compute[186018]: 2026-01-05 21:18:42.287 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:42.853 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:18:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:42.853 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:18:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:18:42.854 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:18:43 compute-0 podman[245386]: 2026-01-05 21:18:43.74563479 +0000 UTC m=+0.097174469 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 05 21:18:45 compute-0 sshd-session[245384]: Received disconnect from 36.255.220.229 port 40468:11:  [preauth]
Jan 05 21:18:45 compute-0 sshd-session[245384]: Disconnected from authenticating user root 36.255.220.229 port 40468 [preauth]
Jan 05 21:18:45 compute-0 nova_compute[186018]: 2026-01-05 21:18:45.883 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:47 compute-0 nova_compute[186018]: 2026-01-05 21:18:47.290 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:50 compute-0 podman[245408]: 2026-01-05 21:18:50.746215036 +0000 UTC m=+0.084256521 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, name=ubi9-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, version=9.6, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-type=git, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 21:18:50 compute-0 podman[245407]: 2026-01-05 21:18:50.780565376 +0000 UTC m=+0.120239764 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202)
Jan 05 21:18:50 compute-0 nova_compute[186018]: 2026-01-05 21:18:50.887 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:52 compute-0 nova_compute[186018]: 2026-01-05 21:18:52.293 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:55 compute-0 nova_compute[186018]: 2026-01-05 21:18:55.889 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:56 compute-0 podman[245451]: 2026-01-05 21:18:56.723284695 +0000 UTC m=+0.074083943 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:18:56 compute-0 podman[245452]: 2026-01-05 21:18:56.75589006 +0000 UTC m=+0.088556353 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:18:57 compute-0 nova_compute[186018]: 2026-01-05 21:18:57.297 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:18:59 compute-0 podman[202426]: time="2026-01-05T21:18:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:18:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:18:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:18:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:18:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 05 21:19:00 compute-0 nova_compute[186018]: 2026-01-05 21:19:00.893 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:01 compute-0 openstack_network_exporter[205720]: ERROR   21:19:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:19:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:19:01 compute-0 openstack_network_exporter[205720]: ERROR   21:19:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:19:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:19:02 compute-0 nova_compute[186018]: 2026-01-05 21:19:02.300 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:04 compute-0 podman[245492]: 2026-01-05 21:19:04.739733049 +0000 UTC m=+0.082784272 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:19:05 compute-0 nova_compute[186018]: 2026-01-05 21:19:05.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:05 compute-0 nova_compute[186018]: 2026-01-05 21:19:05.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:19:05 compute-0 nova_compute[186018]: 2026-01-05 21:19:05.895 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:07 compute-0 nova_compute[186018]: 2026-01-05 21:19:07.304 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.783 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.784 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.784 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.784 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.794 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'name': 'vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.799 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'name': 'vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000003', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.806 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.807 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.808 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.809 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:19:07.807457) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.809 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.810 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:19:07.810031) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.816 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets volume: 14 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.820 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.825 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.826 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:19:07.826401) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.828 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.828 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.828 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:19:07.827939) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:19:07.829399) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.830 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.831 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:19:07.830459) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.832 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:19:07.832567) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.834 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.834 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.834 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:19:07.834112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:19:07.835529) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.837 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:19:07.837045) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.861 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.861 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.861 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.885 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 21569536 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.886 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.886 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.908 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.909 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.909 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.910 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.911 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.912 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.913 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:19:07.910366) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:19:07.912316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.915 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:19:07.913957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.933 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/memory.usage volume: 49.1015625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.952 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/memory.usage volume: 48.921875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.976 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.73046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.977 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.977 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.977 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.977 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.977 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.978 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.978 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.978 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.978 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.979 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.980 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.980 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:19:07.977957) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.980 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.980 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.980 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.981 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.981 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.981 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.981 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.982 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.983 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:19:07.979880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:07.983 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:19:07.983064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.047 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.048 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.048 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.115 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.116 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.116 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.173 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.173 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.174 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.174 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.174 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.174 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.174 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.175 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.175 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.175 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes volume: 1570 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.175 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.175 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2220 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.176 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 461858933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 95970893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:19:08.175124) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 69940491 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:19:08.176680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 420422303 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 95348408 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.177 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.latency volume: 83683963 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.178 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.178 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.178 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.178 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:19:08.179413) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.179 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.180 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.181 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.182 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.182 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.182 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.182 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 21299200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.182 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.183 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.183 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:19:08.181952) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.183 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.184 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.185 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 41803776 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.186 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.186 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.186 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.186 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:19:08.185079) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.186 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.187 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/cpu volume: 35810000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:19:08.187882) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/cpu volume: 39050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 43420000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.188 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:19:08.189180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 1129111979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 12951810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.189 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 1181074077 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 12113149 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.190 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.191 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:19:08.191783) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 235 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.192 14 DEBUG ceilometer.compute.pollsters [-] bc5c255f-3071-4754-9c2a-302e6237171f/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.193 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.193 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.193 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.193 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.194 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.195 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:19:08.196 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:19:09 compute-0 nova_compute[186018]: 2026-01-05 21:19:09.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:09 compute-0 podman[245518]: 2026-01-05 21:19:09.73380111 +0000 UTC m=+0.078730716 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi)
Jan 05 21:19:10 compute-0 nova_compute[186018]: 2026-01-05 21:19:10.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:10 compute-0 nova_compute[186018]: 2026-01-05 21:19:10.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:19:10 compute-0 nova_compute[186018]: 2026-01-05 21:19:10.898 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:11 compute-0 nova_compute[186018]: 2026-01-05 21:19:11.843 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:19:11 compute-0 nova_compute[186018]: 2026-01-05 21:19:11.844 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:19:11 compute-0 nova_compute[186018]: 2026-01-05 21:19:11.844 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:19:12 compute-0 nova_compute[186018]: 2026-01-05 21:19:12.306 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:12 compute-0 podman[245538]: 2026-01-05 21:19:12.76910749 +0000 UTC m=+0.102201571 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, managed_by=edpm_ansible, io.buildah.version=1.29.0, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, version=9.4, name=ubi9, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 
'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=)
Jan 05 21:19:14 compute-0 podman[245557]: 2026-01-05 21:19:14.766034933 +0000 UTC m=+0.111667429 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:19:14 compute-0 nova_compute[186018]: 2026-01-05 21:19:14.982 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:19:15 compute-0 nova_compute[186018]: 2026-01-05 21:19:15.901 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.139 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.140 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.141 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.141 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.164 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.165 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.165 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.165 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.282 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.364 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.367 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.430 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.432 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.514 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.516 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.599 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.607 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.683 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.685 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.797 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk --force-share --output=json" returned: 0 in 0.113s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.798 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.896 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:16 compute-0 nova_compute[186018]: 2026-01-05 21:19:16.897 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.003 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f/disk.eph0 --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.017 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.115 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.118 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.185 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.188 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.251 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.253 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.310 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.336 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.083s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.969 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.972 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4789MB free_disk=72.37905883789062GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.973 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:19:17 compute-0 nova_compute[186018]: 2026-01-05 21:19:17.974 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.089 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.089 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance bc5c255f-3071-4754-9c2a-302e6237171f actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.089 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.090 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.090 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.178 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.191 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.193 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.193 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.219s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.513 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.513 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.514 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:18 compute-0 nova_compute[186018]: 2026-01-05 21:19:18.514 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:19:20 compute-0 nova_compute[186018]: 2026-01-05 21:19:20.905 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:21 compute-0 podman[245615]: 2026-01-05 21:19:21.788521483 +0000 UTC m=+0.120429209 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, release=1755695350, maintainer=Red Hat, Inc., managed_by=edpm_ansible, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container)
Jan 05 21:19:21 compute-0 podman[245614]: 2026-01-05 21:19:21.799212813 +0000 UTC m=+0.146688147 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.build-date=20251202)
Jan 05 21:19:22 compute-0 nova_compute[186018]: 2026-01-05 21:19:22.313 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:25 compute-0 nova_compute[186018]: 2026-01-05 21:19:25.908 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:27 compute-0 nova_compute[186018]: 2026-01-05 21:19:27.315 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:27 compute-0 podman[245658]: 2026-01-05 21:19:27.794067047 +0000 UTC m=+0.114881594 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:19:27 compute-0 podman[245657]: 2026-01-05 21:19:27.822149763 +0000 UTC m=+0.147037576 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:19:29 compute-0 podman[202426]: time="2026-01-05T21:19:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:19:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:19:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:19:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:19:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4359 "" "Go-http-client/1.1"
Jan 05 21:19:30 compute-0 nova_compute[186018]: 2026-01-05 21:19:30.912 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:31 compute-0 openstack_network_exporter[205720]: ERROR   21:19:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:19:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:19:31 compute-0 openstack_network_exporter[205720]: ERROR   21:19:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:19:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:19:32 compute-0 nova_compute[186018]: 2026-01-05 21:19:32.320 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:35 compute-0 podman[245697]: 2026-01-05 21:19:35.750104236 +0000 UTC m=+0.092779114 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:19:35 compute-0 nova_compute[186018]: 2026-01-05 21:19:35.916 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:37 compute-0 nova_compute[186018]: 2026-01-05 21:19:37.322 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:40 compute-0 podman[245720]: 2026-01-05 21:19:40.788671015 +0000 UTC m=+0.127801022 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:19:40 compute-0 nova_compute[186018]: 2026-01-05 21:19:40.920 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:42 compute-0 nova_compute[186018]: 2026-01-05 21:19:42.325 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:19:42.854 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:19:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:19:42.855 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:19:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:19:42.855 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:19:43 compute-0 podman[245739]: 2026-01-05 21:19:43.757943255 +0000 UTC m=+0.104189313 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, distribution-scope=public, version=9.4, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, config_id=kepler, com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=kepler)
Jan 05 21:19:45 compute-0 podman[245758]: 2026-01-05 21:19:45.754737303 +0000 UTC m=+0.103173716 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:19:45 compute-0 nova_compute[186018]: 2026-01-05 21:19:45.923 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:47 compute-0 nova_compute[186018]: 2026-01-05 21:19:47.327 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:50 compute-0 nova_compute[186018]: 2026-01-05 21:19:50.926 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:52 compute-0 nova_compute[186018]: 2026-01-05 21:19:52.331 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:52 compute-0 podman[245781]: 2026-01-05 21:19:52.73895554 +0000 UTC m=+0.082372651 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, release=1755695350, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, architecture=x86_64, vcs-type=git, vendor=Red Hat, Inc., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container)
Jan 05 21:19:52 compute-0 podman[245780]: 2026-01-05 21:19:52.772599663 +0000 UTC m=+0.122217656 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:19:55 compute-0 nova_compute[186018]: 2026-01-05 21:19:55.930 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:57 compute-0 nova_compute[186018]: 2026-01-05 21:19:57.334 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:19:58 compute-0 podman[245825]: 2026-01-05 21:19:58.734687717 +0000 UTC m=+0.071154517 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:19:58 compute-0 podman[245824]: 2026-01-05 21:19:58.76340087 +0000 UTC m=+0.099093280 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:19:59 compute-0 podman[202426]: time="2026-01-05T21:19:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:19:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:19:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:19:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:19:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 05 21:20:00 compute-0 nova_compute[186018]: 2026-01-05 21:20:00.935 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:01 compute-0 openstack_network_exporter[205720]: ERROR   21:20:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:20:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:20:01 compute-0 openstack_network_exporter[205720]: ERROR   21:20:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:20:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:20:02 compute-0 nova_compute[186018]: 2026-01-05 21:20:02.337 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.688 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.689 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.689 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.690 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.690 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.692 186022 INFO nova.compute.manager [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Terminating instance
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.694 186022 DEBUG nova.compute.manager [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:20:04 compute-0 kernel: tap2fb09e12-63 (unregistering): left promiscuous mode
Jan 05 21:20:04 compute-0 NetworkManager[56598]: <info>  [1767648004.7938] device (tap2fb09e12-63): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.805 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 ovn_controller[98229]: 2026-01-05T21:20:04Z|00060|binding|INFO|Releasing lport 2fb09e12-6360-4c5c-be29-1c3782724ceb from this chassis (sb_readonly=0)
Jan 05 21:20:04 compute-0 ovn_controller[98229]: 2026-01-05T21:20:04Z|00061|binding|INFO|Setting lport 2fb09e12-6360-4c5c-be29-1c3782724ceb down in Southbound
Jan 05 21:20:04 compute-0 ovn_controller[98229]: 2026-01-05T21:20:04Z|00062|binding|INFO|Removing iface tap2fb09e12-63 ovn-installed in OVS
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.814 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.820 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:22:cf:e6 192.168.0.15'], port_security=['fa:16:3e:22:cf:e6 192.168.0.15'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-aposstbqe4u5-3vxh7p6lsvtd-port-jshloneuhom7', 'neutron:cidrs': '192.168.0.15/24', 'neutron:device_id': 'bc5c255f-3071-4754-9c2a-302e6237171f', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-aposstbqe4u5-3vxh7p6lsvtd-port-jshloneuhom7', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.234', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[], tunnel_key=5, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=2fb09e12-6360-4c5c-be29-1c3782724ceb) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.823 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 2fb09e12-6360-4c5c-be29-1c3782724ceb in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 unbound from our chassis
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.825 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.826 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.853 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[5d5be6ae-1d5a-40d6-b873-f79f05493832]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Deactivated successfully.
Jan 05 21:20:04 compute-0 systemd[1]: machine-qemu\x2d3\x2dinstance\x2d00000003.scope: Consumed 1min 37.996s CPU time.
Jan 05 21:20:04 compute-0 systemd-machined[157312]: Machine qemu-3-instance-00000003 terminated.
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.885 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[a9ba52b0-959d-4a79-8ebc-c0c086d68ee1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.888 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[89ff5455-b7a8-4c70-a7be-4e85877ad28d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.916 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[9ae43983-c940-4efc-b5e0-09e2386d03ac]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.922 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.927 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.941 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[5c19957f-8bf1-4f6e-a4f8-38dddedfe9ed]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 13, 'rx_bytes': 574, 'tx_bytes': 690, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 34860, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 245882, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.959 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f58a743f-63d3-41ce-ad44-bd5d37113fd0]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245892, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 245892, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.961 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.963 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.968 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.969 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.969 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.969 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:20:04 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:04.969 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.985 186022 INFO nova.virt.libvirt.driver [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance destroyed successfully.
Jan 05 21:20:04 compute-0 nova_compute[186018]: 2026-01-05 21:20:04.985 186022 DEBUG nova.objects.instance [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'resources' on Instance uuid bc5c255f-3071-4754-9c2a-302e6237171f obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.185 186022 DEBUG nova.virt.libvirt.vif [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:12:12Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-aposstbqe4u5-3vxh7p6lsvtd-vnf-iw64z6vmzv3z',id=3,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:12:23Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-le2fg87b',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:12:23Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MDIwODc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 21:20:05 compute-0 nova_compute[186018]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MDIwO
Dc1MjM5NjYxNTUxMzY1Mz09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTAyMDg3NTIzOTY2MTU1MTM2NTM9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0wMjA4NzUyMzk2NjE1NTEzNjUzPT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=bc5c255f-3071-4754-9c2a-302e6237171f,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.185 186022 DEBUG nova.network.os_vif_util [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "address": "fa:16:3e:22:cf:e6", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.15", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.234", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap2fb09e12-63", "ovs_interfaceid": "2fb09e12-6360-4c5c-be29-1c3782724ceb", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.186 186022 DEBUG nova.network.os_vif_util [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.187 186022 DEBUG os_vif [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.189 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.190 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap2fb09e12-63, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.191 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.193 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.197 186022 INFO os_vif [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:22:cf:e6,bridge_name='br-int',has_traffic_filtering=True,id=2fb09e12-6360-4c5c-be29-1c3782724ceb,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap2fb09e12-63')
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.199 186022 INFO nova.virt.libvirt.driver [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Deleting instance files /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f_del
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.200 186022 INFO nova.virt.libvirt.driver [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Deletion of /var/lib/nova/instances/bc5c255f-3071-4754-9c2a-302e6237171f_del complete
Jan 05 21:20:05 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:20:05.185 186022 DEBUG nova.virt.libvirt.vif [None req-401b0875-f7 [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.264 186022 INFO nova.compute.manager [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Took 0.57 seconds to destroy the instance on the hypervisor.
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.265 186022 DEBUG oslo.service.loopingcall [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.265 186022 DEBUG nova.compute.manager [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.266 186022 DEBUG nova.network.neutron [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.420 186022 DEBUG nova.compute.manager [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-vif-unplugged-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.421 186022 DEBUG oslo_concurrency.lockutils [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.421 186022 DEBUG oslo_concurrency.lockutils [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.422 186022 DEBUG oslo_concurrency.lockutils [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.422 186022 DEBUG nova.compute.manager [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] No waiting events found dispatching network-vif-unplugged-2fb09e12-6360-4c5c-be29-1c3782724ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.423 186022 DEBUG nova.compute.manager [req-97b1f48d-92b5-4633-8fa3-1e5e0417f7ab req-1e31c2f2-c8b9-4c70-8f2a-5e0c5b93d26a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-vif-unplugged-2fb09e12-6360-4c5c-be29-1c3782724ceb for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:20:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:05.452 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=8, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=7) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:20:05 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:05.454 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:20:05 compute-0 nova_compute[186018]: 2026-01-05 21:20:05.455 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:06 compute-0 nova_compute[186018]: 2026-01-05 21:20:06.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:06 compute-0 nova_compute[186018]: 2026-01-05 21:20:06.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:20:06 compute-0 podman[245901]: 2026-01-05 21:20:06.738796137 +0000 UTC m=+0.081016995 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.339 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.397 186022 DEBUG nova.network.neutron [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.417 186022 INFO nova.compute.manager [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Took 2.15 seconds to deallocate network for instance.
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.454 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.455 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.517 186022 DEBUG nova.compute.manager [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.518 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.518 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.518 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.518 186022 DEBUG nova.compute.manager [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] No waiting events found dispatching network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.519 186022 WARNING nova.compute.manager [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received unexpected event network-vif-plugged-2fb09e12-6360-4c5c-be29-1c3782724ceb for instance with vm_state deleted and task_state None.
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.519 186022 DEBUG nova.compute.manager [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Received event network-changed-2fb09e12-6360-4c5c-be29-1c3782724ceb external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.519 186022 DEBUG nova.compute.manager [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Refreshing instance network info cache due to event network-changed-2fb09e12-6360-4c5c-be29-1c3782724ceb. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.519 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.519 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.520 186022 DEBUG nova.network.neutron [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Refreshing network info cache for port 2fb09e12-6360-4c5c-be29-1c3782724ceb _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.584 186022 DEBUG nova.compute.provider_tree [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.606 186022 DEBUG nova.scheduler.client.report [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.855 186022 DEBUG nova.network.neutron [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:20:07 compute-0 nova_compute[186018]: 2026-01-05 21:20:07.891 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.436s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:08 compute-0 nova_compute[186018]: 2026-01-05 21:20:08.088 186022 INFO nova.scheduler.client.report [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Deleted allocations for instance bc5c255f-3071-4754-9c2a-302e6237171f
Jan 05 21:20:08 compute-0 nova_compute[186018]: 2026-01-05 21:20:08.222 186022 DEBUG oslo_concurrency.lockutils [None req-401b0875-f747-4d5c-b405-459b27b38b79 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "bc5c255f-3071-4754-9c2a-302e6237171f" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.533s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:08 compute-0 nova_compute[186018]: 2026-01-05 21:20:08.505 186022 DEBUG nova.network.neutron [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:20:08 compute-0 nova_compute[186018]: 2026-01-05 21:20:08.632 186022 DEBUG oslo_concurrency.lockutils [req-6372ff08-8055-4982-8415-aeb2f53e35f8 req-dae02cee-4349-465f-9be4-978fc3504c2b 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-bc5c255f-3071-4754-9c2a-302e6237171f" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.193 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.902 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.902 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.902 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:20:10 compute-0 nova_compute[186018]: 2026-01-05 21:20:10.902 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:20:11 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:11.456 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '8'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:20:11 compute-0 podman[245924]: 2026-01-05 21:20:11.757412254 +0000 UTC m=+0.103749722 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:20:12 compute-0 nova_compute[186018]: 2026-01-05 21:20:12.343 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:14 compute-0 nova_compute[186018]: 2026-01-05 21:20:14.367 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:20:14 compute-0 nova_compute[186018]: 2026-01-05 21:20:14.396 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:20:14 compute-0 nova_compute[186018]: 2026-01-05 21:20:14.397 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:20:14 compute-0 nova_compute[186018]: 2026-01-05 21:20:14.398 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:14 compute-0 podman[245943]: 2026-01-05 21:20:14.777832264 +0000 UTC m=+0.125131322 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, config_id=kepler, architecture=x86_64, version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, name=ubi9, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 21:20:15 compute-0 nova_compute[186018]: 2026-01-05 21:20:15.194 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:15 compute-0 nova_compute[186018]: 2026-01-05 21:20:15.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:15 compute-0 nova_compute[186018]: 2026-01-05 21:20:15.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:15 compute-0 nova_compute[186018]: 2026-01-05 21:20:15.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.001 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.002 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.003 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.004 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.106 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.205 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.207 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 podman[245964]: 2026-01-05 21:20:16.215363618 +0000 UTC m=+0.121023684 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.285 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.287 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.356 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.357 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.416 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.424 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.484 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.486 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.585 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.099s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.587 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.650 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.651 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:20:16 compute-0 nova_compute[186018]: 2026-01-05 21:20:16.733 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.126 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.128 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4964MB free_disk=72.40099716186523GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.128 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.129 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.246 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.246 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.247 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.247 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.308 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.323 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.347 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.351 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:20:17 compute-0 nova_compute[186018]: 2026-01-05 21:20:17.352 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:19 compute-0 nova_compute[186018]: 2026-01-05 21:20:19.984 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648004.9813457, bc5c255f-3071-4754-9c2a-302e6237171f => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:20:19 compute-0 nova_compute[186018]: 2026-01-05 21:20:19.985 186022 INFO nova.compute.manager [-] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] VM Stopped (Lifecycle Event)
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.005 186022 DEBUG nova.compute.manager [None req-9e701f0c-8410-45f6-8e82-311e1feabbc0 - - - - - -] [instance: bc5c255f-3071-4754-9c2a-302e6237171f] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.197 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.352 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.372 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.372 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:20 compute-0 nova_compute[186018]: 2026-01-05 21:20:20.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:20:22 compute-0 nova_compute[186018]: 2026-01-05 21:20:22.349 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:23 compute-0 podman[246008]: 2026-01-05 21:20:23.771023529 +0000 UTC m=+0.106532764 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vendor=Red Hat, Inc., io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public)
Jan 05 21:20:23 compute-0 podman[246007]: 2026-01-05 21:20:23.830851858 +0000 UTC m=+0.168346385 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:20:25 compute-0 nova_compute[186018]: 2026-01-05 21:20:25.199 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:27 compute-0 nova_compute[186018]: 2026-01-05 21:20:27.351 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:29 compute-0 podman[246056]: 2026-01-05 21:20:29.732547849 +0000 UTC m=+0.085955165 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, managed_by=edpm_ansible, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:20:29 compute-0 podman[202426]: time="2026-01-05T21:20:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:20:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:20:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:20:29 compute-0 podman[246057]: 2026-01-05 21:20:29.752983945 +0000 UTC m=+0.093437221 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:20:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:20:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4368 "" "Go-http-client/1.1"
Jan 05 21:20:30 compute-0 nova_compute[186018]: 2026-01-05 21:20:30.202 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:31 compute-0 openstack_network_exporter[205720]: ERROR   21:20:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:20:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:20:31 compute-0 openstack_network_exporter[205720]: ERROR   21:20:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:20:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:20:32 compute-0 nova_compute[186018]: 2026-01-05 21:20:32.352 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:35 compute-0 nova_compute[186018]: 2026-01-05 21:20:35.205 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:37 compute-0 nova_compute[186018]: 2026-01-05 21:20:37.355 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:37 compute-0 podman[246097]: 2026-01-05 21:20:37.768658548 +0000 UTC m=+0.103965847 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:20:40 compute-0 nova_compute[186018]: 2026-01-05 21:20:40.206 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:41 compute-0 ovn_controller[98229]: 2026-01-05T21:20:41Z|00063|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 05 21:20:42 compute-0 nova_compute[186018]: 2026-01-05 21:20:42.357 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:42 compute-0 podman[246121]: 2026-01-05 21:20:42.71200677 +0000 UTC m=+0.063158447 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 05 21:20:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:42.855 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:20:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:42.855 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:20:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:20:42.856 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:20:44 compute-0 sshd-session[246140]: Accepted publickey for zuul from 38.102.83.164 port 47438 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 21:20:44 compute-0 systemd-logind[788]: New session 29 of user zuul.
Jan 05 21:20:44 compute-0 systemd[1]: Started Session 29 of User zuul.
Jan 05 21:20:44 compute-0 sshd-session[246140]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 21:20:45 compute-0 nova_compute[186018]: 2026-01-05 21:20:45.209 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:45 compute-0 sudo[246331]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-gdjusnbfmaahosazjzdpamoxgfndzzeo ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767648044.8641646-59747-230311617747168/AnsiballZ_command.py'
Jan 05 21:20:45 compute-0 sudo[246331]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:20:45 compute-0 podman[246292]: 2026-01-05 21:20:45.488106043 +0000 UTC m=+0.084154917 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, distribution-scope=public, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, config_id=kepler, io.buildah.version=1.29.0, release=1214.1726694543, vcs-type=git, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30, name=ubi9, com.redhat.component=ubi9-container)
Jan 05 21:20:45 compute-0 python3[246338]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep ceilometer_agent_compute
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:20:45 compute-0 sudo[246331]: pam_unix(sudo:session): session closed for user root
Jan 05 21:20:46 compute-0 podman[246375]: 2026-01-05 21:20:46.772940084 +0000 UTC m=+0.119431673 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 05 21:20:47 compute-0 nova_compute[186018]: 2026-01-05 21:20:47.359 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:50 compute-0 nova_compute[186018]: 2026-01-05 21:20:50.212 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:52 compute-0 nova_compute[186018]: 2026-01-05 21:20:52.361 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:54 compute-0 podman[246395]: 2026-01-05 21:20:54.852852482 +0000 UTC m=+0.190767323 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down 
image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, managed_by=edpm_ansible, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, config_id=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, io.buildah.version=1.33.7, maintainer=Red Hat, Inc.)
Jan 05 21:20:54 compute-0 podman[246394]: 2026-01-05 21:20:54.892966314 +0000 UTC m=+0.240068496 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 21:20:55 compute-0 nova_compute[186018]: 2026-01-05 21:20:55.215 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:57 compute-0 nova_compute[186018]: 2026-01-05 21:20:57.370 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:20:59 compute-0 podman[202426]: time="2026-01-05T21:20:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:20:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:20:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:20:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:20:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 05 21:21:00 compute-0 nova_compute[186018]: 2026-01-05 21:21:00.217 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:00 compute-0 podman[246436]: 2026-01-05 21:21:00.720526431 +0000 UTC m=+0.066651648 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 21:21:00 compute-0 podman[246437]: 2026-01-05 21:21:00.726728724 +0000 UTC m=+0.066726781 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:21:01 compute-0 openstack_network_exporter[205720]: ERROR   21:21:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:21:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:21:01 compute-0 openstack_network_exporter[205720]: ERROR   21:21:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:21:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:21:02 compute-0 nova_compute[186018]: 2026-01-05 21:21:02.373 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.605 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.605 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.640 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.728 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.729 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.744 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:21:03 compute-0 nova_compute[186018]: 2026-01-05 21:21:03.744 186022 INFO nova.compute.claims [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.211 186022 DEBUG nova.compute.provider_tree [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.390 186022 DEBUG nova.scheduler.client.report [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.418 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.689s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.419 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.467 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Not allocating networking since 'none' was specified. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1948
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.483 186022 INFO nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.512 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.584 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.586 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.587 186022 INFO nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Creating image(s)
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.588 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.588 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.589 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.590 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4b3cb6d77cb774829604f60b9397307587f6e640" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:04 compute-0 nova_compute[186018]: 2026-01-05 21:21:04.591 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4b3cb6d77cb774829604f60b9397307587f6e640" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.219 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.709 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.810 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.part --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.811 186022 DEBUG nova.virt.images [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] 2e31ab9c-9bfa-47c7-a33b-345c4eac5342 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.812 186022 DEBUG nova.privsep.utils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 21:21:05 compute-0 nova_compute[186018]: 2026-01-05 21:21:05.813 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.part /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.001 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.part /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.converted" returned: 0 in 0.188s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.005 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.066 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640.converted --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.067 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4b3cb6d77cb774829604f60b9397307587f6e640" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 1.476s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.081 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.141 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.142 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4b3cb6d77cb774829604f60b9397307587f6e640" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.142 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4b3cb6d77cb774829604f60b9397307587f6e640" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.154 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.211 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.213 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640,backing_fmt=raw /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.253 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640,backing_fmt=raw /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.255 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4b3cb6d77cb774829604f60b9397307587f6e640" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.112s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.255 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.313 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.315 186022 DEBUG nova.virt.disk.api [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Checking if we can resize image /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.315 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.378 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.380 186022 DEBUG nova.virt.disk.api [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Cannot resize image /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.381 186022 DEBUG nova.objects.instance [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'migration_context' on Instance uuid 9ca460fc-2a39-402b-8690-29aad98e5b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.400 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.401 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.403 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.422 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.485 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.487 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "ephemeral_1_0706d66" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.488 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.503 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.580 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.076s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.583 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.664 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/ephemeral_1_0706d66,backing_fmt=raw /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 1073741824" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.666 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "ephemeral_1_0706d66" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.667 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.732 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/ephemeral_1_0706d66 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.733 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.733 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Ensure instance console log exists: /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.734 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.735 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.735 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.737 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Start _get_guest_xml network_info=[] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.eph0': {'bus': 'virtio', 'dev': 'vdb', 'type': 'disk'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:20:49Z,direct_url=<?>,disk_format='qcow2',id=2e31ab9c-9bfa-47c7-a33b-345c4eac5342,min_disk=0,min_ram=0,name='fvt_testing_image',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:20:55Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': '2e31ab9c-9bfa-47c7-a33b-345c4eac5342'}], 'ephemerals': [{'guest_format': None, 'device_name': '/dev/vdb', 'encryption_secret_uuid': None, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_options': None, 'size': 1, 'encrypted': False, 'encryption_format': None}], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.744 186022 WARNING nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.750 186022 DEBUG nova.virt.libvirt.host [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.751 186022 DEBUG nova.virt.libvirt.host [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.756 186022 DEBUG nova.virt.libvirt.host [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.757 186022 DEBUG nova.virt.libvirt.host [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.758 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.758 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:20:58Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=1,extra_specs={},flavorid='c05823eb-e11f-4200-bfa7-59d40f938393',id=2,is_public=True,memory_mb=512,name='fvt_testing_flavor',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='b874c39491a2377b8490f5f1e89761a4',container_format='bare',created_at=2026-01-05T21:20:49Z,direct_url=<?>,disk_format='qcow2',id=2e31ab9c-9bfa-47c7-a33b-345c4eac5342,min_disk=0,min_ram=0,name='fvt_testing_image',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=16300544,status='active',tags=<?>,updated_at=2026-01-05T21:20:55Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.759 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.759 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.759 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.760 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.760 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.760 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.761 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.761 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.761 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.762 186022 DEBUG nova.virt.hardware [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.768 186022 DEBUG nova.objects.instance [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'pci_devices' on Instance uuid 9ca460fc-2a39-402b-8690-29aad98e5b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:21:06 compute-0 nova_compute[186018]: 2026-01-05 21:21:06.947 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <uuid>9ca460fc-2a39-402b-8690-29aad98e5b5e</uuid>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <name>instance-00000005</name>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <memory>524288</memory>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:name>fvt_testing_server</nova:name>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:21:06</nova:creationTime>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:flavor name="fvt_testing_flavor">
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:memory>512</nova:memory>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:ephemeral>1</nova:ephemeral>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:user uuid="41f377b42540490198f271301cf5fe90">admin</nova:user>
Jan 05 21:21:06 compute-0 nova_compute[186018]:         <nova:project uuid="704814115a61471f9b45484171f67b5f">admin</nova:project>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="2e31ab9c-9bfa-47c7-a33b-345c4eac5342"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <nova:ports/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <system>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="serial">9ca460fc-2a39-402b-8690-29aad98e5b5e</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="uuid">9ca460fc-2a39-402b-8690-29aad98e5b5e</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </system>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <os>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </os>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <features>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </features>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <target dev="vdb" bus="virtio"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.config"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/console.log" append="off"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <video>
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </video>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:21:06 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:21:06 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:21:06 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:21:06 compute-0 nova_compute[186018]: </domain>
Jan 05 21:21:06 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:21:07 compute-0 nova_compute[186018]: 2026-01-05 21:21:07.189 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:21:07 compute-0 nova_compute[186018]: 2026-01-05 21:21:07.190 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name vdb, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:21:07 compute-0 nova_compute[186018]: 2026-01-05 21:21:07.190 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:21:07 compute-0 nova_compute[186018]: 2026-01-05 21:21:07.191 186022 INFO nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Using config drive
Jan 05 21:21:07 compute-0 nova_compute[186018]: 2026-01-05 21:21:07.376 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.784 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.784 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.785 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.785 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c245520>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.793 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'name': 'vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.794 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 9ca460fc-2a39-402b-8690-29aad98e5b5e from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:21:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:07.795 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/9ca460fc-2a39-402b-8690-29aad98e5b5e -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.009 186022 INFO nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Creating config drive at /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.config
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.016 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapu14pzo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.141 186022 DEBUG oslo_concurrency.processutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpapu14pzo" returned: 0 in 0.125s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:08 compute-0 systemd-machined[157312]: New machine qemu-5-instance-00000005.
Jan 05 21:21:08 compute-0 systemd[1]: Started Virtual Machine qemu-5-instance-00000005.
Jan 05 21:21:08 compute-0 podman[246533]: 2026-01-05 21:21:08.330164458 +0000 UTC m=+0.099379697 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.402 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1551 Content-Type: application/json Date: Mon, 05 Jan 2026 21:21:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-62457b98-1b68-4c72-a3fa-400c18568b15 x-openstack-request-id: req-62457b98-1b68-4c72-a3fa-400c18568b15 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.403 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "9ca460fc-2a39-402b-8690-29aad98e5b5e", "name": "fvt_testing_server", "status": "BUILD", "tenant_id": "704814115a61471f9b45484171f67b5f", "user_id": "41f377b42540490198f271301cf5fe90", "metadata": {}, "hostId": "cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424", "image": {"id": "2e31ab9c-9bfa-47c7-a33b-345c4eac5342", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/2e31ab9c-9bfa-47c7-a33b-345c4eac5342"}]}, "flavor": {"id": "c05823eb-e11f-4200-bfa7-59d40f938393", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/c05823eb-e11f-4200-bfa7-59d40f938393"}]}, "created": "2026-01-05T21:21:02Z", "updated": "2026-01-05T21:21:04Z", "addresses": {}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/9ca460fc-2a39-402b-8690-29aad98e5b5e"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/9ca460fc-2a39-402b-8690-29aad98e5b5e"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "", "key_name": null, "OS-SRV-USG:launched_at": null, "OS-SRV-USG:terminated_at": null, "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000005", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": "spawning", "OS-EXT-STS:vm_state": "building", "OS-EXT-STS:power_state": 0, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.403 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/9ca460fc-2a39-402b-8690-29aad98e5b5e used request id req-62457b98-1b68-4c72-a3fa-400c18568b15 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.404 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '9ca460fc-2a39-402b-8690-29aad98e5b5e', 'name': 'fvt_testing_server', 'flavor': {'id': 'c05823eb-e11f-4200-bfa7-59d40f938393', 'name': 'fvt_testing_flavor', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '2e31ab9c-9bfa-47c7-a33b-345c4eac5342'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000005', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'paused', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'paused', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:21:08 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.410 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.410 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.411 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.411 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.411 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.412 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.412 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.413 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.413 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.413 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.413 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.413 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:21:08.411360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.414 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:21:08.413854) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.418 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.424 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.425 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.425 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.425 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.425 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.425 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.426 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.426 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.426 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.426 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:21:08.425986) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.426 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.427 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets volume: 22 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.428 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.428 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.428 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.428 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.428 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.429 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.429 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.429 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:21:08.427576) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.429 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:21:08.429305) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.430 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes volume: 2356 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.431 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2342 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.431 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:21:08.430680) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.431 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.431 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.432 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:21:08.432341) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.434 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:21:08.433744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.433 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.434 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.434 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.434 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.434 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.435 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.435 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.435 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:21:08.434974) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.435 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.435 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.436 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:21:08.436446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.437 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.438 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:21:08.437945) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.466 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.467 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.469 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.599 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648068.599044, 9ca460fc-2a39-402b-8690-29aad98e5b5e => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.599 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] VM Resumed (Lifecycle Event)
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.604 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.605 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.608 186022 INFO nova.virt.libvirt.driver [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Instance spawned successfully.
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.609 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.623 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.623 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.624 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.648 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.648 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.649 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.649 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.650 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.651 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 84 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.651 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.651 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.651 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:21:08.650523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.652 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: fvt_testing_server>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: fvt_testing_server>]
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:21:08.652570) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.653 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.654 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.654 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.654 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:21:08.653827) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.655 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.657 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:21:08.655513) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.676 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.704 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.704 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic is not available for instance 9ca460fc-2a39-402b-8690-29aad98e5b5e: ceilometer.compute.pollsters.NoVolumeException
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.724 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.73046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.725 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.726 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.726 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.726 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:21:08.725787) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.728 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:21:08.727998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.728 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.728 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.728 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.729 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.729 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.729 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.730 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.730 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.730 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.731 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.732 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:21:08.731963) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.790 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.790 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.790 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.844 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.850 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.855 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.855 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.856 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.915 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.916 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.916 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.917 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.918 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.918 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.918 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:21:08.917966) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 461858933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:21:08.919638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.920 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 95970893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.920 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 69940491 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.920 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.920 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.921 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.921 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.921 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.921 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.922 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.922 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.922 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.922 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.923 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.923 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.923 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:21:08.923084) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.923 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.923 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.924 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.924 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.924 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.read.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.924 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.925 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.925 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.926 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.927 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.927 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.927 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:21:08.926573) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.927 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.928 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.928 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.928 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.928 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.929 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.929 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.929 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.929 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.930 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.930 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.930 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.930 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.930 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.931 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.931 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:21:08.930191) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.931 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.931 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.932 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.932 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.932 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.932 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.933 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.933 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.933 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/cpu volume: 37360000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:21:08.934316) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.934 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/cpu volume: 60000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.935 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 44980000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.935 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.935 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.935 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.935 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.936 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.936 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 1129111979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.936 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 12951810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.936 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.937 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.937 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:21:08.936142) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.937 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.938 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.938 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.938 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.939 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.939 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.939 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.939 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.939 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.940 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.940 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.940 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:21:08.939995) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.940 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.940 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.941 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.941 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.941 14 DEBUG ceilometer.compute.pollsters [-] 9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.941 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.942 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.942 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.943 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.944 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:21:08.945 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.991 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.992 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.992 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.992 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.993 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:08 compute-0 nova_compute[186018]: 2026-01-05 21:21:08.993 186022 DEBUG nova.virt.libvirt.driver [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.304 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.305 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648068.6036468, 9ca460fc-2a39-402b-8690-29aad98e5b5e => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.305 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] VM Started (Lifecycle Event)
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.381 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.386 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.520 186022 INFO nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Took 4.93 seconds to spawn the instance on the hypervisor.
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.520 186022 DEBUG nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.610 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:21:09 compute-0 nova_compute[186018]: 2026-01-05 21:21:09.899 186022 INFO nova.compute.manager [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Took 6.21 seconds to build instance.
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.221 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.243 186022 DEBUG oslo_concurrency.lockutils [None req-bbafe3e0-1495-48fe-9d6a-f242b9730242 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.638s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.948 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.948 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:21:10 compute-0 nova_compute[186018]: 2026-01-05 21:21:10.949 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:21:12 compute-0 nova_compute[186018]: 2026-01-05 21:21:12.379 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:12 compute-0 nova_compute[186018]: 2026-01-05 21:21:12.406 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:21:12 compute-0 nova_compute[186018]: 2026-01-05 21:21:12.421 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:21:12 compute-0 nova_compute[186018]: 2026-01-05 21:21:12.421 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:21:13 compute-0 nova_compute[186018]: 2026-01-05 21:21:13.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:13 compute-0 podman[246595]: 2026-01-05 21:21:13.721302952 +0000 UTC m=+0.071471364 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 05 21:21:15 compute-0 nova_compute[186018]: 2026-01-05 21:21:15.223 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:15 compute-0 nova_compute[186018]: 2026-01-05 21:21:15.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:15 compute-0 nova_compute[186018]: 2026-01-05 21:21:15.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:15 compute-0 nova_compute[186018]: 2026-01-05 21:21:15.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:21:15 compute-0 podman[246616]: 2026-01-05 21:21:15.741922016 +0000 UTC m=+0.086798797 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release-0.7.12=, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.openshift.expose-services=, com.redhat.component=ubi9-container, name=ubi9, maintainer=Red Hat, Inc.)
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.384 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.607 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.608 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.703 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.703 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.703 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.703 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:21:17 compute-0 podman[246638]: 2026-01-05 21:21:17.787984227 +0000 UTC m=+0.147144110 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.802 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.882 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.884 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.954 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:17 compute-0 nova_compute[186018]: 2026-01-05 21:21:17.957 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.018 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.020 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.081 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.088 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.158 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.160 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.213 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk --force-share --output=json" returned: 0 in 0.053s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.215 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.286 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.289 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.361 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e/disk.eph0 --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.378 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.444 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.446 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.511 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.512 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.577 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.579 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:21:18 compute-0 nova_compute[186018]: 2026-01-05 21:21:18.641 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.029 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.031 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4800MB free_disk=72.37244033813477GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.031 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.032 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.365 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.366 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.366 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 9ca460fc-2a39-402b-8690-29aad98e5b5e actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.367 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.368 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=2048MB phys_disk=79GB used_disk=6GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.586 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.627 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.724 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:21:19 compute-0 nova_compute[186018]: 2026-01-05 21:21:19.725 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.693s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:20 compute-0 nova_compute[186018]: 2026-01-05 21:21:20.228 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:20 compute-0 nova_compute[186018]: 2026-01-05 21:21:20.577 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:20 compute-0 nova_compute[186018]: 2026-01-05 21:21:20.579 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:21 compute-0 nova_compute[186018]: 2026-01-05 21:21:21.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:21 compute-0 nova_compute[186018]: 2026-01-05 21:21:21.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:22 compute-0 nova_compute[186018]: 2026-01-05 21:21:22.387 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:25 compute-0 nova_compute[186018]: 2026-01-05 21:21:25.231 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:25 compute-0 nova_compute[186018]: 2026-01-05 21:21:25.475 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:21:25 compute-0 nova_compute[186018]: 2026-01-05 21:21:25.476 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:21:25 compute-0 nova_compute[186018]: 2026-01-05 21:21:25.512 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:21:25 compute-0 podman[246696]: 2026-01-05 21:21:25.727551952 +0000 UTC m=+0.075751668 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., distribution-scope=public, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, architecture=x86_64, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, config_id=openstack_network_exporter, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.expose-services=, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:21:25 compute-0 podman[246695]: 2026-01-05 21:21:25.765673903 +0000 UTC m=+0.117862304 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:21:27 compute-0 nova_compute[186018]: 2026-01-05 21:21:27.388 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.132 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.133 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.133 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "9ca460fc-2a39-402b-8690-29aad98e5b5e-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.133 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.134 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.135 186022 INFO nova.compute.manager [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Terminating instance
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.136 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "refresh_cache-9ca460fc-2a39-402b-8690-29aad98e5b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.136 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquired lock "refresh_cache-9ca460fc-2a39-402b-8690-29aad98e5b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.137 186022 DEBUG nova.network.neutron [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:21:28 compute-0 nova_compute[186018]: 2026-01-05 21:21:28.947 186022 DEBUG nova.network.neutron [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.314 186022 DEBUG nova.network.neutron [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.398 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Releasing lock "refresh_cache-9ca460fc-2a39-402b-8690-29aad98e5b5e" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.399 186022 DEBUG nova.compute.manager [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:21:29 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Deactivated successfully.
Jan 05 21:21:29 compute-0 systemd[1]: machine-qemu\x2d5\x2dinstance\x2d00000005.scope: Consumed 21.475s CPU time.
Jan 05 21:21:29 compute-0 systemd-machined[157312]: Machine qemu-5-instance-00000005 terminated.
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.672 186022 INFO nova.virt.libvirt.driver [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Instance destroyed successfully.
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.673 186022 DEBUG nova.objects.instance [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'resources' on Instance uuid 9ca460fc-2a39-402b-8690-29aad98e5b5e obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.735 186022 INFO nova.virt.libvirt.driver [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Deleting instance files /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e_del
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.736 186022 INFO nova.virt.libvirt.driver [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Deletion of /var/lib/nova/instances/9ca460fc-2a39-402b-8690-29aad98e5b5e_del complete
Jan 05 21:21:29 compute-0 podman[202426]: time="2026-01-05T21:21:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:21:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:21:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:21:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:21:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4369 "" "Go-http-client/1.1"
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.850 186022 INFO nova.compute.manager [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Took 0.45 seconds to destroy the instance on the hypervisor.
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.850 186022 DEBUG oslo.service.loopingcall [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.850 186022 DEBUG nova.compute.manager [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.850 186022 DEBUG nova.network.neutron [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.949 186022 DEBUG nova.network.neutron [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:21:29 compute-0 nova_compute[186018]: 2026-01-05 21:21:29.963 186022 DEBUG nova.network.neutron [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.021 186022 INFO nova.compute.manager [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Took 0.17 seconds to deallocate network for instance.
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.189 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.190 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.233 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.313 186022 DEBUG nova.compute.provider_tree [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.538 186022 DEBUG nova.scheduler.client.report [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.579 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.390s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.655 186022 INFO nova.scheduler.client.report [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Deleted allocations for instance 9ca460fc-2a39-402b-8690-29aad98e5b5e
Jan 05 21:21:30 compute-0 nova_compute[186018]: 2026-01-05 21:21:30.904 186022 DEBUG oslo_concurrency.lockutils [None req-a55c11d6-c4f7-4502-b3ab-c710fccd0986 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "9ca460fc-2a39-402b-8690-29aad98e5b5e" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.771s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:31 compute-0 openstack_network_exporter[205720]: ERROR   21:21:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:21:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:21:31 compute-0 openstack_network_exporter[205720]: ERROR   21:21:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:21:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:21:31 compute-0 podman[246756]: 2026-01-05 21:21:31.717526174 +0000 UTC m=+0.069522535 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:21:31 compute-0 podman[246757]: 2026-01-05 21:21:31.755150311 +0000 UTC m=+0.097483369 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:21:32 compute-0 nova_compute[186018]: 2026-01-05 21:21:32.390 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:35 compute-0 nova_compute[186018]: 2026-01-05 21:21:35.236 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:37 compute-0 nova_compute[186018]: 2026-01-05 21:21:37.392 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:38 compute-0 podman[246797]: 2026-01-05 21:21:38.736419476 +0000 UTC m=+0.085304128 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:21:40 compute-0 nova_compute[186018]: 2026-01-05 21:21:40.240 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:42 compute-0 nova_compute[186018]: 2026-01-05 21:21:42.396 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:21:42.857 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:21:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:21:42.857 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:21:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:21:42.858 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:21:44 compute-0 nova_compute[186018]: 2026-01-05 21:21:44.669 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648089.6673396, 9ca460fc-2a39-402b-8690-29aad98e5b5e => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:21:44 compute-0 nova_compute[186018]: 2026-01-05 21:21:44.669 186022 INFO nova.compute.manager [-] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] VM Stopped (Lifecycle Event)
Jan 05 21:21:44 compute-0 podman[246821]: 2026-01-05 21:21:44.739529516 +0000 UTC m=+0.096634117 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 05 21:21:44 compute-0 nova_compute[186018]: 2026-01-05 21:21:44.912 186022 DEBUG nova.compute.manager [None req-f1e8780a-1ac2-43cc-99ba-d7dca1937f17 - - - - - -] [instance: 9ca460fc-2a39-402b-8690-29aad98e5b5e] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:21:45 compute-0 nova_compute[186018]: 2026-01-05 21:21:45.244 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:45 compute-0 sshd-session[246143]: Received disconnect from 38.102.83.164 port 47438:11: disconnected by user
Jan 05 21:21:45 compute-0 sshd-session[246143]: Disconnected from user zuul 38.102.83.164 port 47438
Jan 05 21:21:45 compute-0 sshd-session[246140]: pam_unix(sshd:session): session closed for user zuul
Jan 05 21:21:45 compute-0 systemd[1]: session-29.scope: Deactivated successfully.
Jan 05 21:21:45 compute-0 systemd[1]: session-29.scope: Consumed 1.009s CPU time.
Jan 05 21:21:45 compute-0 systemd-logind[788]: Session 29 logged out. Waiting for processes to exit.
Jan 05 21:21:45 compute-0 systemd-logind[788]: Removed session 29.
Jan 05 21:21:46 compute-0 podman[246841]: 2026-01-05 21:21:46.723639762 +0000 UTC m=+0.081371386 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, 
architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, release-0.7.12=, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, distribution-scope=public, maintainer=Red Hat, Inc.)
Jan 05 21:21:47 compute-0 nova_compute[186018]: 2026-01-05 21:21:47.399 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:48 compute-0 podman[246858]: 2026-01-05 21:21:48.794410273 +0000 UTC m=+0.138974008 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute)
Jan 05 21:21:50 compute-0 nova_compute[186018]: 2026-01-05 21:21:50.247 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:52 compute-0 nova_compute[186018]: 2026-01-05 21:21:52.401 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:55 compute-0 nova_compute[186018]: 2026-01-05 21:21:55.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:56 compute-0 podman[246879]: 2026-01-05 21:21:56.764529758 +0000 UTC m=+0.112373640 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, distribution-scope=public, name=ubi9-minimal, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, version=9.6, io.openshift.expose-services=, container_name=openstack_network_exporter)
Jan 05 21:21:56 compute-0 podman[246878]: 2026-01-05 21:21:56.784188134 +0000 UTC m=+0.123338598 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 21:21:57 compute-0 nova_compute[186018]: 2026-01-05 21:21:57.405 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:21:59 compute-0 podman[202426]: time="2026-01-05T21:21:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:21:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:21:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:21:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:21:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 05 21:22:00 compute-0 nova_compute[186018]: 2026-01-05 21:22:00.253 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:01 compute-0 openstack_network_exporter[205720]: ERROR   21:22:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:22:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:22:01 compute-0 openstack_network_exporter[205720]: ERROR   21:22:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:22:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:22:02 compute-0 nova_compute[186018]: 2026-01-05 21:22:02.408 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:02 compute-0 podman[246924]: 2026-01-05 21:22:02.748111904 +0000 UTC m=+0.089186801 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent)
Jan 05 21:22:02 compute-0 podman[246925]: 2026-01-05 21:22:02.790206689 +0000 UTC m=+0.112907684 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:22:05 compute-0 nova_compute[186018]: 2026-01-05 21:22:05.256 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:07 compute-0 nova_compute[186018]: 2026-01-05 21:22:07.411 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:08 compute-0 sshd-session[246963]: Accepted publickey for zuul from 38.102.83.164 port 42408 ssh2: RSA SHA256:mXJcJI31MVGiY6AzcXJ/p7r5TKU3Hv0WPE1JL6YqbII
Jan 05 21:22:08 compute-0 systemd-logind[788]: New session 30 of user zuul.
Jan 05 21:22:08 compute-0 systemd[1]: Started Session 30 of User zuul.
Jan 05 21:22:08 compute-0 sshd-session[246963]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 21:22:09 compute-0 sudo[247155]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-zjvwhlpfhfqcwfmivsqyufqvgwvbtega ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767648128.6692595-60488-95461184722893/AnsiballZ_command.py'
Jan 05 21:22:09 compute-0 sudo[247155]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:22:09 compute-0 nova_compute[186018]: 2026-01-05 21:22:09.498 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:09 compute-0 podman[247115]: 2026-01-05 21:22:09.499652249 +0000 UTC m=+0.108596390 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:22:09 compute-0 nova_compute[186018]: 2026-01-05 21:22:09.500 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:22:09 compute-0 python3[247165]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep node_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:22:09 compute-0 sudo[247155]: pam_unix(sudo:session): session closed for user root
Jan 05 21:22:10 compute-0 nova_compute[186018]: 2026-01-05 21:22:10.260 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.415 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.993 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.994 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.994 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:22:12 compute-0 nova_compute[186018]: 2026-01-05 21:22:12.995 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:22:14 compute-0 nova_compute[186018]: 2026-01-05 21:22:14.705 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:22:15 compute-0 nova_compute[186018]: 2026-01-05 21:22:15.164 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:22:15 compute-0 nova_compute[186018]: 2026-01-05 21:22:15.165 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:22:15 compute-0 nova_compute[186018]: 2026-01-05 21:22:15.166 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:15 compute-0 nova_compute[186018]: 2026-01-05 21:22:15.263 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:15 compute-0 podman[247205]: 2026-01-05 21:22:15.723489427 +0000 UTC m=+0.070742908 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi)
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.497 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.497 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.498 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.600 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.664 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.666 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.732 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.733 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.801 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.802 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.860 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.876 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.939 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:16 compute-0 nova_compute[186018]: 2026-01-05 21:22:16.940 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:16 compute-0 podman[247385]: 2026-01-05 21:22:16.996299547 +0000 UTC m=+0.094963993 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, release-0.7.12=, vcs-type=git, container_name=kepler, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, release=1214.1726694543, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, vendor=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f)
Jan 05 21:22:17 compute-0 sudo[247431]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-ngiqtcavegrjrkaxxjknjhmipjbanvst ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767648136.3920887-60650-47029579677730/AnsiballZ_command.py'
Jan 05 21:22:17 compute-0 sudo[247431]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.010 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.011 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.085 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.086 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:22:17 compute-0 python3[247436]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep podman_exporter
                                            _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.153 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:22:17 compute-0 sudo[247431]: pam_unix(sudo:session): session closed for user root
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.417 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.511 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.512 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4935MB free_disk=72.37335968017578GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.512 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.513 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.801 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.801 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.802 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.802 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.826 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.847 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.848 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.868 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.894 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.966 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:22:17 compute-0 nova_compute[186018]: 2026-01-05 21:22:17.981 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:22:18 compute-0 nova_compute[186018]: 2026-01-05 21:22:18.000 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:22:18 compute-0 nova_compute[186018]: 2026-01-05 21:22:18.000 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.487s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:22:19 compute-0 nova_compute[186018]: 2026-01-05 21:22:19.001 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:19 compute-0 nova_compute[186018]: 2026-01-05 21:22:19.001 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:19 compute-0 podman[247482]: 2026-01-05 21:22:19.765434872 +0000 UTC m=+0.107015860 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute)
Jan 05 21:22:20 compute-0 nova_compute[186018]: 2026-01-05 21:22:20.267 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:20 compute-0 nova_compute[186018]: 2026-01-05 21:22:20.458 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:20 compute-0 nova_compute[186018]: 2026-01-05 21:22:20.485 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:22 compute-0 nova_compute[186018]: 2026-01-05 21:22:22.419 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:23 compute-0 nova_compute[186018]: 2026-01-05 21:22:23.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:22:25 compute-0 nova_compute[186018]: 2026-01-05 21:22:25.271 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:27 compute-0 nova_compute[186018]: 2026-01-05 21:22:27.421 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:27 compute-0 sudo[247707]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-yqmpeyaeipnugbtevprhwrqdiiwzdpxy ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767648146.6945407-60803-78664733280533/AnsiballZ_command.py'
Jan 05 21:22:27 compute-0 sudo[247707]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:22:27 compute-0 podman[247651]: 2026-01-05 21:22:27.502437164 +0000 UTC m=+0.101964537 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., version=9.6, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, vcs-type=git, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., io.buildah.version=1.33.7, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:22:27 compute-0 podman[247650]: 2026-01-05 21:22:27.565396926 +0000 UTC m=+0.171435970 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ovn_controller, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 21:22:27 compute-0 python3[247716]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep kepler _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:22:27 compute-0 sudo[247707]: pam_unix(sudo:session): session closed for user root
Jan 05 21:22:29 compute-0 podman[202426]: time="2026-01-05T21:22:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:22:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:22:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:22:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:22:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 05 21:22:30 compute-0 nova_compute[186018]: 2026-01-05 21:22:30.273 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:31 compute-0 openstack_network_exporter[205720]: ERROR   21:22:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:22:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:22:31 compute-0 openstack_network_exporter[205720]: ERROR   21:22:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:22:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:22:32 compute-0 nova_compute[186018]: 2026-01-05 21:22:32.425 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:33 compute-0 podman[247762]: 2026-01-05 21:22:33.756447956 +0000 UTC m=+0.098869486 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:22:33 compute-0 podman[247763]: 2026-01-05 21:22:33.786910565 +0000 UTC m=+0.110385037 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:22:35 compute-0 nova_compute[186018]: 2026-01-05 21:22:35.278 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:37 compute-0 nova_compute[186018]: 2026-01-05 21:22:37.429 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:39 compute-0 podman[247804]: 2026-01-05 21:22:39.742260249 +0000 UTC m=+0.086254845 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:22:40 compute-0 nova_compute[186018]: 2026-01-05 21:22:40.280 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:42 compute-0 sudo[248000]:     zuul : TTY=pts/1 ; PWD=/home/zuul ; USER=root ; COMMAND=/bin/sh -c 'echo BECOME-SUCCESS-xgbunzycvysxehuhjacihlnwfeheodxv ; KUBECONFIG=/home/zuul/.crc/machines/crc/kubeconfig PATH=/home/zuul/.crc/bin:/home/zuul/.crc/bin/oc:/home/zuul/bin:/home/zuul/.local/bin:/home/zuul/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin /usr/bin/python3 /home/zuul/.ansible/tmp/ansible-tmp-1767648161.3992214-61022-195130960096391/AnsiballZ_command.py'
Jan 05 21:22:42 compute-0 sudo[248000]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:22:42 compute-0 python3[248002]: ansible-ansible.legacy.command Invoked with _raw_params=podman ps -a --format "{{.Names}} {{.Status}}" | grep openstack_network_exporter _uses_shell=True stdin_add_newline=True strip_empty_ends=True argv=None chdir=None executable=None creates=None removes=None stdin=None
Jan 05 21:22:42 compute-0 nova_compute[186018]: 2026-01-05 21:22:42.431 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:42 compute-0 sudo[248000]: pam_unix(sudo:session): session closed for user root
Jan 05 21:22:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:22:42.859 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:22:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:22:42.860 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:22:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:22:42.861 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:22:45 compute-0 nova_compute[186018]: 2026-01-05 21:22:45.283 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:46 compute-0 podman[248042]: 2026-01-05 21:22:46.772018095 +0000 UTC m=+0.117651277 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:22:47 compute-0 nova_compute[186018]: 2026-01-05 21:22:47.435 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:47 compute-0 podman[248062]: 2026-01-05 21:22:47.803737079 +0000 UTC m=+0.143311952 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, container_name=kepler, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.4, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, release-0.7.12=, config_id=kepler)
Jan 05 21:22:50 compute-0 nova_compute[186018]: 2026-01-05 21:22:50.287 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:50 compute-0 podman[248083]: 2026-01-05 21:22:50.804762988 +0000 UTC m=+0.149538165 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:22:52 compute-0 nova_compute[186018]: 2026-01-05 21:22:52.439 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:55 compute-0 nova_compute[186018]: 2026-01-05 21:22:55.290 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:57 compute-0 nova_compute[186018]: 2026-01-05 21:22:57.443 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:22:57 compute-0 podman[248104]: 2026-01-05 21:22:57.732347334 +0000 UTC m=+0.079175349 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.expose-services=, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vendor=Red Hat, Inc., version=9.6, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1755695350, architecture=x86_64, config_id=openstack_network_exporter)
Jan 05 21:22:57 compute-0 podman[248103]: 2026-01-05 21:22:57.800446131 +0000 UTC m=+0.148839337 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:22:59 compute-0 podman[202426]: time="2026-01-05T21:22:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:22:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:22:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:22:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:22:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Jan 05 21:23:00 compute-0 nova_compute[186018]: 2026-01-05 21:23:00.293 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:01 compute-0 openstack_network_exporter[205720]: ERROR   21:23:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:23:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:23:01 compute-0 openstack_network_exporter[205720]: ERROR   21:23:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:23:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:23:02 compute-0 nova_compute[186018]: 2026-01-05 21:23:02.447 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:04 compute-0 podman[248148]: 2026-01-05 21:23:04.724035505 +0000 UTC m=+0.071871417 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 21:23:04 compute-0 podman[248149]: 2026-01-05 21:23:04.773189685 +0000 UTC m=+0.105978022 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:23:05 compute-0 nova_compute[186018]: 2026-01-05 21:23:05.296 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:07 compute-0 nova_compute[186018]: 2026-01-05 21:23:07.451 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.785 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.786 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.794 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'name': 'vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000004', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {'metering.server_group': 'a6371b97-6a0c-4b37-9443-eaf5410da4a4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'name': 'test_0', 'flavor': {'id': 'd9d5992a-1c00-4233-a43d-71321ed82446', 'name': 'm1.small', 'vcpus': 1, 'ram': 512, 'disk': 1, 'ephemeral': 1, 'swap': 0}, 'image': {'id': '31cf9c34-2e56-49e9-bb98-955ac3cf9185'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000001', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '704814115a61471f9b45484171f67b5f', 'user_id': '41f377b42540490198f271301cf5fe90', 'hostId': 'cfde697f383bebd47763f1ef3a53e06ee3bc7745ed7bf84914295424', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.799 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.799 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.799 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.799 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.800 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:23:07.799878) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.801 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:23:07.801892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.808 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.814 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets volume: 25 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.816 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.817 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:23:07.816301) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets volume: 23 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:23:07.818452) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.819 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets volume: 24 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.820 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.821 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.821 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:23:07.820755) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.822 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes volume: 2426 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:23:07.822522) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.823 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes volume: 2412 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.824 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.824 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.825 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.826 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.826 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.827 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:23:07.824650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.827 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:23:07.827114) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.829 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.829 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:23:07.828909) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.830 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.830 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.831 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.831 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:23:07.831320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.856 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 21635072 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.856 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.857 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.allocation volume: 585728 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.884 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 21307392 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.885 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 1253376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.885 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.allocation volume: 487424 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.886 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.886 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.886 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.886 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.886 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.887 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.887 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.887 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.888 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.888 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.888 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.888 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.889 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.889 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.889 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.889 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.890 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.890 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:23:07.887046) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.890 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.890 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:23:07.889472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.891 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.891 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.891 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.891 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.891 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:23:07.891370) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.918 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/memory.usage volume: 48.98046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.945 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/memory.usage volume: 48.73046875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.946 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.946 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.946 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.947 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.947 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.947 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:23:07.947143) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.948 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.949 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.capacity volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.949 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.949 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:23:07.948613) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.949 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.capacity volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.950 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:07.951 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:23:07.950764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.018 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.019 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.019 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.bytes volume: 385378 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.098 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 23308800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.099 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 3227648 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.099 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.bytes volume: 274786 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.100 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.100 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.100 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.101 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/network.incoming.bytes volume: 1654 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.101 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/network.incoming.bytes volume: 2304 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.101 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:23:08.101107) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.102 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 461858933 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.103 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 95970893 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.103 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.latency volume: 69940491 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.103 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 488988741 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.104 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 83667442 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.104 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.latency volume: 61020876 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.104 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.105 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.read.requests volume: 124 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.106 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 840 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.106 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 173 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.106 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.read.requests volume: 109 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:23:08.102817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.107 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 21364736 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:23:08.105163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.108 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:23:08.107810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.108 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.108 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.usage volume: 583680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.109 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 21233664 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.109 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 393216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.109 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.usage volume: 485376 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.110 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.110 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.110 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.110 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.111 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 41828352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.111 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.111 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.112 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 41779200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.112 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 512 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.112 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:23:08.111028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.113 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.114 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.114 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/cpu volume: 38940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.114 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/cpu volume: 46570000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:23:08.113961) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.115 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:23:08.115648) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.116 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 1129111979 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.116 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 12951810 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.116 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.116 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 1391100422 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.116 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 11839143 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.117 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.117 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.117 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.117 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.118 14 DEBUG ceilometer.compute.pollsters [-] 4f980272-c18f-4c66-9c04-8a07a7115de7/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.119 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 233 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.119 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.119 14 DEBUG ceilometer.compute.pollsters [-] f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.120 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.121 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:23:08.118425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.121 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.122 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.123 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.124 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.125 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:23:08.126 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:23:10 compute-0 nova_compute[186018]: 2026-01-05 21:23:10.298 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:10 compute-0 nova_compute[186018]: 2026-01-05 21:23:10.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:10 compute-0 nova_compute[186018]: 2026-01-05 21:23:10.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:23:10 compute-0 podman[248190]: 2026-01-05 21:23:10.765464321 +0000 UTC m=+0.118924402 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:23:11 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.453 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.989 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.990 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:23:12 compute-0 nova_compute[186018]: 2026-01-05 21:23:12.990 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:23:14 compute-0 nova_compute[186018]: 2026-01-05 21:23:14.205 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [{"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:23:15 compute-0 nova_compute[186018]: 2026-01-05 21:23:15.302 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:15 compute-0 nova_compute[186018]: 2026-01-05 21:23:15.392 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:23:15 compute-0 nova_compute[186018]: 2026-01-05 21:23:15.393 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:23:15 compute-0 nova_compute[186018]: 2026-01-05 21:23:15.394 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.457 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.493 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.493 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.493 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.493 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.586 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.652 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.653 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.720 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.721 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:17 compute-0 podman[248216]: 2026-01-05 21:23:17.724320509 +0000 UTC m=+0.074143867 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.780 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.781 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.860 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.871 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.934 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:17 compute-0 nova_compute[186018]: 2026-01-05 21:23:17.935 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.015 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.016 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.121 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.105s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.122 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.207 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.695 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.696 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4926MB free_disk=72.37337875366211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.696 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.697 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:23:18 compute-0 podman[248257]: 2026-01-05 21:23:18.781842309 +0000 UTC m=+0.115189353 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, release=1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., container_name=kepler, vcs-type=git, io.openshift.tags=base rhel9, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container)
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.813 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.813 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.813 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.814 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.869 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.890 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.893 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:23:18 compute-0 nova_compute[186018]: 2026-01-05 21:23:18.894 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:23:19 compute-0 nova_compute[186018]: 2026-01-05 21:23:19.895 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:19 compute-0 nova_compute[186018]: 2026-01-05 21:23:19.896 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:20 compute-0 nova_compute[186018]: 2026-01-05 21:23:20.305 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:21 compute-0 podman[248278]: 2026-01-05 21:23:21.753733024 +0000 UTC m=+0.101396052 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224)
Jan 05 21:23:22 compute-0 nova_compute[186018]: 2026-01-05 21:23:22.459 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:22 compute-0 nova_compute[186018]: 2026-01-05 21:23:22.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:23 compute-0 nova_compute[186018]: 2026-01-05 21:23:23.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:23:25 compute-0 nova_compute[186018]: 2026-01-05 21:23:25.309 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:27 compute-0 nova_compute[186018]: 2026-01-05 21:23:27.463 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:28 compute-0 podman[248299]: 2026-01-05 21:23:28.783271025 +0000 UTC m=+0.126541171 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vcs-type=git, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, release=1755695350, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 05 21:23:28 compute-0 podman[248298]: 2026-01-05 21:23:28.820531663 +0000 UTC m=+0.171161243 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller)
Jan 05 21:23:29 compute-0 podman[202426]: time="2026-01-05T21:23:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:23:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:23:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:23:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:23:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 05 21:23:30 compute-0 nova_compute[186018]: 2026-01-05 21:23:30.311 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:31 compute-0 openstack_network_exporter[205720]: ERROR   21:23:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:23:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:23:31 compute-0 openstack_network_exporter[205720]: ERROR   21:23:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:23:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:23:32 compute-0 nova_compute[186018]: 2026-01-05 21:23:32.466 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:35 compute-0 nova_compute[186018]: 2026-01-05 21:23:35.314 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:35 compute-0 podman[248345]: 2026-01-05 21:23:35.43253807 +0000 UTC m=+0.077551776 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:23:35 compute-0 podman[248344]: 2026-01-05 21:23:35.433884635 +0000 UTC m=+0.091163303 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 21:23:37 compute-0 nova_compute[186018]: 2026-01-05 21:23:37.469 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:40 compute-0 nova_compute[186018]: 2026-01-05 21:23:40.317 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:41 compute-0 podman[248384]: 2026-01-05 21:23:41.772997933 +0000 UTC m=+0.122532187 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:23:42 compute-0 sshd-session[246966]: Received disconnect from 38.102.83.164 port 42408:11: disconnected by user
Jan 05 21:23:42 compute-0 sshd-session[246966]: Disconnected from user zuul 38.102.83.164 port 42408
Jan 05 21:23:42 compute-0 sshd-session[246963]: pam_unix(sshd:session): session closed for user zuul
Jan 05 21:23:42 compute-0 nova_compute[186018]: 2026-01-05 21:23:42.471 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:42 compute-0 systemd[1]: session-30.scope: Deactivated successfully.
Jan 05 21:23:42 compute-0 systemd[1]: session-30.scope: Consumed 4.627s CPU time.
Jan 05 21:23:42 compute-0 systemd-logind[788]: Session 30 logged out. Waiting for processes to exit.
Jan 05 21:23:42 compute-0 systemd-logind[788]: Removed session 30.
Jan 05 21:23:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:23:42.860 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:23:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:23:42.861 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:23:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:23:42.862 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:23:45 compute-0 nova_compute[186018]: 2026-01-05 21:23:45.321 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:47 compute-0 nova_compute[186018]: 2026-01-05 21:23:47.473 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:48 compute-0 podman[248407]: 2026-01-05 21:23:48.791354564 +0000 UTC m=+0.128475292 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 05 21:23:48 compute-0 podman[248427]: 2026-01-05 21:23:48.91198293 +0000 UTC m=+0.078059010 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, release=1214.1726694543, io.buildah.version=1.29.0, com.redhat.component=ubi9-container, release-0.7.12=, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vcs-type=git, version=9.4, io.openshift.expose-services=, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, managed_by=edpm_ansible, name=ubi9, vendor=Red Hat, Inc.)
Jan 05 21:23:50 compute-0 nova_compute[186018]: 2026-01-05 21:23:50.324 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:52 compute-0 nova_compute[186018]: 2026-01-05 21:23:52.475 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:52 compute-0 podman[248449]: 2026-01-05 21:23:52.782643919 +0000 UTC m=+0.120859712 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:23:55 compute-0 nova_compute[186018]: 2026-01-05 21:23:55.328 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:57 compute-0 nova_compute[186018]: 2026-01-05 21:23:57.478 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:23:59 compute-0 podman[248470]: 2026-01-05 21:23:59.730838176 +0000 UTC m=+0.075473672 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, distribution-scope=public, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., name=ubi9-minimal, vcs-type=git, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 
'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:23:59 compute-0 podman[202426]: time="2026-01-05T21:23:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:23:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:23:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:23:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:23:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 05 21:23:59 compute-0 podman[248469]: 2026-01-05 21:23:59.782343347 +0000 UTC m=+0.130789713 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
org.label-schema.license=GPLv2, tcib_managed=true, managed_by=edpm_ansible)
Jan 05 21:24:00 compute-0 nova_compute[186018]: 2026-01-05 21:24:00.332 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:01 compute-0 openstack_network_exporter[205720]: ERROR   21:24:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:24:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:24:01 compute-0 openstack_network_exporter[205720]: ERROR   21:24:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:24:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:24:02 compute-0 nova_compute[186018]: 2026-01-05 21:24:02.483 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:05 compute-0 nova_compute[186018]: 2026-01-05 21:24:05.335 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:05 compute-0 podman[248513]: 2026-01-05 21:24:05.778589134 +0000 UTC m=+0.117930235 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible)
Jan 05 21:24:05 compute-0 podman[248514]: 2026-01-05 21:24:05.78835207 +0000 UTC m=+0.121403686 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:24:07 compute-0 nova_compute[186018]: 2026-01-05 21:24:07.486 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:10 compute-0 nova_compute[186018]: 2026-01-05 21:24:10.338 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:12 compute-0 nova_compute[186018]: 2026-01-05 21:24:12.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:12 compute-0 nova_compute[186018]: 2026-01-05 21:24:12.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:24:12 compute-0 nova_compute[186018]: 2026-01-05 21:24:12.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:24:12 compute-0 nova_compute[186018]: 2026-01-05 21:24:12.495 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:12 compute-0 podman[248553]: 2026-01-05 21:24:12.806483015 +0000 UTC m=+0.137748226 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:24:13 compute-0 nova_compute[186018]: 2026-01-05 21:24:13.026 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:24:13 compute-0 nova_compute[186018]: 2026-01-05 21:24:13.027 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:24:13 compute-0 nova_compute[186018]: 2026-01-05 21:24:13.027 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:24:13 compute-0 nova_compute[186018]: 2026-01-05 21:24:13.028 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:24:14 compute-0 nova_compute[186018]: 2026-01-05 21:24:14.422 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [{"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:24:14 compute-0 nova_compute[186018]: 2026-01-05 21:24:14.438 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-f64de408-e6d1-4f7f-9f94-e20a4c83a87a" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:24:14 compute-0 nova_compute[186018]: 2026-01-05 21:24:14.439 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:24:14 compute-0 nova_compute[186018]: 2026-01-05 21:24:14.440 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:14 compute-0 nova_compute[186018]: 2026-01-05 21:24:14.440 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:24:15 compute-0 nova_compute[186018]: 2026-01-05 21:24:15.342 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:16 compute-0 nova_compute[186018]: 2026-01-05 21:24:16.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:17 compute-0 nova_compute[186018]: 2026-01-05 21:24:17.455 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:17 compute-0 nova_compute[186018]: 2026-01-05 21:24:17.494 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.495 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.497 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.498 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.639 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:19 compute-0 podman[248578]: 2026-01-05 21:24:19.738596462 +0000 UTC m=+0.078496350 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']})
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.743 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.746 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:19 compute-0 podman[248577]: 2026-01-05 21:24:19.749980111 +0000 UTC m=+0.094017848 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-type=git, vendor=Red Hat, Inc., architecture=x86_64, container_name=kepler, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, maintainer=Red Hat, Inc., io.openshift.expose-services=, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.818 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.820 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.881 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.883 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.961 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7/disk.eph0 --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:19 compute-0 nova_compute[186018]: 2026-01-05 21:24:19.973 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.032 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.034 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.106 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.108 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.173 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.174 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.239 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a/disk.eph0 --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.345 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.663 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.664 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4873MB free_disk=72.37337875366211GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.665 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.665 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.754 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.755 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 4f980272-c18f-4c66-9c04-8a07a7115de7 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 2, 'MEMORY_MB': 512, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.755 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.755 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1536MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.818 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.832 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.833 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:24:20 compute-0 nova_compute[186018]: 2026-01-05 21:24:20.834 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.169s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:21 compute-0 nova_compute[186018]: 2026-01-05 21:24:21.834 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:22 compute-0 nova_compute[186018]: 2026-01-05 21:24:22.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:22 compute-0 nova_compute[186018]: 2026-01-05 21:24:22.496 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:23 compute-0 podman[248640]: 2026-01-05 21:24:23.790733465 +0000 UTC m=+0.131570944 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:24:24 compute-0 nova_compute[186018]: 2026-01-05 21:24:24.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:25 compute-0 nova_compute[186018]: 2026-01-05 21:24:25.348 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:25 compute-0 nova_compute[186018]: 2026-01-05 21:24:25.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:24:27 compute-0 nova_compute[186018]: 2026-01-05 21:24:27.498 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:29 compute-0 podman[202426]: time="2026-01-05T21:24:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:24:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:24:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:24:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:24:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 05 21:24:30 compute-0 nova_compute[186018]: 2026-01-05 21:24:30.352 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:30 compute-0 podman[248659]: 2026-01-05 21:24:30.782096565 +0000 UTC m=+0.117317010 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, com.redhat.component=ubi9-minimal-container, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, name=ubi9-minimal, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., architecture=x86_64, container_name=openstack_network_exporter, managed_by=edpm_ansible, release=1755695350)
Jan 05 21:24:30 compute-0 podman[248658]: 2026-01-05 21:24:30.806359451 +0000 UTC m=+0.156091717 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 05 21:24:31 compute-0 openstack_network_exporter[205720]: ERROR   21:24:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:24:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:24:31 compute-0 openstack_network_exporter[205720]: ERROR   21:24:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:24:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:24:32 compute-0 nova_compute[186018]: 2026-01-05 21:24:32.502 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:35 compute-0 nova_compute[186018]: 2026-01-05 21:24:35.356 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:36 compute-0 podman[248703]: 2026-01-05 21:24:36.748656604 +0000 UTC m=+0.086016499 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:24:36 compute-0 podman[248702]: 2026-01-05 21:24:36.788066248 +0000 UTC m=+0.125986337 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:24:37 compute-0 nova_compute[186018]: 2026-01-05 21:24:37.506 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:40 compute-0 nova_compute[186018]: 2026-01-05 21:24:40.360 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:42 compute-0 nova_compute[186018]: 2026-01-05 21:24:42.507 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:42.861 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:42.863 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:42.864 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.436 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.436 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.437 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.437 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.437 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.439 186022 INFO nova.compute.manager [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Terminating instance
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.440 186022 DEBUG nova.compute.manager [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:24:43 compute-0 kernel: tap6fba2106-2e (unregistering): left promiscuous mode
Jan 05 21:24:43 compute-0 NetworkManager[56598]: <info>  [1767648283.4973] device (tap6fba2106-2e): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:24:43 compute-0 ovn_controller[98229]: 2026-01-05T21:24:43Z|00064|binding|INFO|Releasing lport 6fba2106-2ecf-47b1-ba86-3ca344528342 from this chassis (sb_readonly=0)
Jan 05 21:24:43 compute-0 ovn_controller[98229]: 2026-01-05T21:24:43Z|00065|binding|INFO|Setting lport 6fba2106-2ecf-47b1-ba86-3ca344528342 down in Southbound
Jan 05 21:24:43 compute-0 ovn_controller[98229]: 2026-01-05T21:24:43Z|00066|binding|INFO|Removing iface tap6fba2106-2e ovn-installed in OVS
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.508 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.511 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.517 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:71:37:b5 192.168.0.7'], port_security=['fa:16:3e:71:37:b5 192.168.0.7'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'name': 'vnf-scaleup_group-3m37qezpxu27-ozi7dsf63p6s-yfrgspb44fvx-port-z3a4cfes3len', 'neutron:cidrs': '192.168.0.7/24', 'neutron:device_id': '4f980272-c18f-4c66-9c04-8a07a7115de7', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': 'vnf-scaleup_group-3m37qezpxu27-ozi7dsf63p6s-yfrgspb44fvx-port-z3a4cfes3len', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.208', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[], tunnel_key=6, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=6fba2106-2ecf-47b1-ba86-3ca344528342) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.519 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 6fba2106-2ecf-47b1-ba86-3ca344528342 in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 unbound from our chassis
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.520 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network b871481f-0445-42f2-8b6a-2e8572ae5b49
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.524 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.537 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[203ca957-b646-4e0d-86e0-bd812c3b303f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Deactivated successfully.
Jan 05 21:24:43 compute-0 systemd[1]: machine-qemu\x2d4\x2dinstance\x2d00000004.scope: Consumed 1min 58.363s CPU time.
Jan 05 21:24:43 compute-0 systemd-machined[157312]: Machine qemu-4-instance-00000004 terminated.
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.573 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[a1f3ddde-b23c-4348-ab63-6550db30c51a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.576 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[7d3314c1-e467-47ac-947e-f53f57a8fa1e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 podman[248746]: 2026-01-05 21:24:43.599309801 +0000 UTC m=+0.072862433 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.602 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[bc579c36-7c25-4881-a5a6-1453ed6d19f6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.619 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0833ab8e-d2d4-40e0-b7ce-7a6e763961b5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapb871481f-01'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:97:f0:d4'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 7, 'tx_packets': 15, 'rx_bytes': 574, 'tx_bytes': 774, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 12], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393151, 'reachable_time': 24022, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248777, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.637 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ab5fef57-8a1a-42f4-b585-a73aa889d4ad]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393170, 'tstamp': 393170}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248778, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 24, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '192.168.0.2'], ['IFA_LOCAL', '192.168.0.2'], ['IFA_BROADCAST', '192.168.0.255'], ['IFA_LABEL', 'tapb871481f-01'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 393175, 'tstamp': 393175}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 248778, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.638 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.640 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.647 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.647 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapb871481f-00, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.647 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.648 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapb871481f-00, col_values=(('external_ids', {'iface-id': 'a16ac18f-2e71-4427-b368-840ecfba3d33'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.648 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:24:43 compute-0 kernel: tap6fba2106-2e: entered promiscuous mode
Jan 05 21:24:43 compute-0 kernel: tap6fba2106-2e (unregistering): left promiscuous mode
Jan 05 21:24:43 compute-0 NetworkManager[56598]: <info>  [1767648283.6721] manager: (tap6fba2106-2e): new Tun device (/org/freedesktop/NetworkManager/Devices/33)
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.677 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.734 186022 INFO nova.virt.libvirt.driver [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Instance destroyed successfully.
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.735 186022 DEBUG nova.objects.instance [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'resources' on Instance uuid 4f980272-c18f-4c66-9c04-8a07a7115de7 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.755 186022 DEBUG nova.virt.libvirt.vif [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:14:16Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='vn-ezpxu27-ozi7dsf63p6s-yfrgspb44fvx-vnf-pw7hcpks7wak',id=4,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:14:25Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={metering.server_group='a6371b97-6a0c-4b37-9443-eaf5410da4a4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-jvficg90',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member,admin',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image
_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:14:25Z,user_data='Q29udGVudC1UeXBlOiBtdWx0aXBhcnQvbWl4ZWQ7IGJvdW5kYXJ5PSI9PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0iCk1JTUUtVmVyc2lvbjogMS4wCgotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQvY2xvdWQtY29uZmlnOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2xvdWQtY29uZmlnIgoKCgojIENhcHR1cmUgYWxsIHN1YnByb2Nlc3Mgb3V0cHV0IGludG8gYSBsb2dmaWxlCiMgVXNlZnVsIGZvciB0cm91Ymxlc2hvb3RpbmcgY2xvdWQtaW5pdCBpc3N1ZXMKb3V0cHV0OiB7YWxsOiAnfCB0ZWUgLWEgL3Zhci9sb2cvY2xvdWQtaW5pdC1vdXRwdXQubG9nJ30KCi0tPT09PT09PT09PT09PT09MTIyNjc3ODAyMzgwMDQxNzY4OD09CkNvbnRlbnQtVHlwZTogdGV4dC9jbG91ZC1ib290aG9vazsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImJvb3Rob29rLnNoIgoKIyEvdXNyL2Jpbi9iYXNoCgojIEZJWE1FKHNoYWRvd2VyKSB0aGlzIGlzIGEgd29ya2Fyb3VuZCBmb3IgY2xvdWQtaW5pdCAwLjYuMyBwcmVzZW50IGluIFVidW50dQojIDEyLjA0IExUUzoKIyBodHRwczovL2J1Z3MubGF1bmNocGFkLm5ldC9oZWF0LytidWcvMTI1NzQxMAojCiMgVGhlIG9sZCBjbG91ZC1pbml0IGRvZXNuJ3QgY3JlYXRlIHRoZSB1c2VycyBkaXJlY3RseSBzbyB0aGUgY29tbWFuZHMgdG8gZG8KIyB0aGlzIGFyZSBpbmplY3RlZCB0aG91Z2ggbm92YV91dGlscy5weS4KIwojIE9uY2Ugd2UgZHJvcCBzdXBwb3J0IGZvciAwLjYuMywgd2UgY2FuIHNhZmVseSByZW1vdmUgdGhpcy4KCgojIGluIGNhc2UgaGVhdC1jZm50b29scyBoYXMgYmVlbiBpbnN0YWxsZWQgZnJvbSBwYWNrYWdlIGJ1dCBubyBzeW1saW5rcwojIGFyZSB5ZXQgaW4gL29wdC9hd3MvYmluLwpjZm4tY3JlYXRlLWF3cy1zeW1saW5rcwoKIyBEbyBub3QgcmVtb3ZlIC0gdGhlIGNsb3VkIGJvb3Rob29rIHNob3VsZCBhbHdheXMgcmV0dXJuIHN1Y2Nlc3MKZXhpdCAwCgotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQvcGFyd
C1oYW5kbGVyOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0icGFydC1oYW5kbGVyLnB5IgoKIyBwYXJ0LWhhbmRsZXIKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBvcwppbXBvcnQgc3lzCgoKZGVmIGxpc3RfdHlwZXMoKToKICAgIHJldHVybiBbInRleHQveC1jZm5pbml0ZGF0YSJdCgoKZGVmIGhhbmRsZV9wYXJ0KGRhdGEsIGN0eXBlLCBmaWxlbmFtZSwgcGF5bG9hZCk6CiAgICBpZiBjdHlwZSA9PSAiX19iZWdpbl9fIjoKICAgICAgICB0cnk6CiAgICAgICAgICAgIG9zLm1ha2VkaXJzKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzJywgaW50KCI3MDAiLCA4KSkKICAgICAgICBleGNlcHQgT1NFcnJvcjoKICAgICAgICAgICAgZXhfdHlwZSwgZSwgdGIgPSBzeXMuZXhjX2luZm8oKQogICAgICAgICAgICBpZiBlLmVycm5vICE9IGVycm5vLkVFWElTVDoKICAgICAgICAgICAgICAgIHJhaXNlCiAgICAgICAgcmV0dXJuCgogICAgaWYgY3R5cGUgPT0gIl9fZW5kX18iOgogICAgICAgIHJldHVybgoKICAgIHRpbWVzdGFtcCA9IGRhdGV0aW1lLmRhdGV0aW1lLm5vdygpCiAgICB3aXRoIG9wZW4oJy92YXIvbG9nL3BhcnQtaGFuZGxlci5sb2cnLCAnYScpIGFzIGxvZzoKICAgICAgICBsb2cud3JpdGUoJyVzIGZpbGVuYW1lOiVzLCBjdHlwZTolc1xuJyAlICh0aW1lc3RhbXAsIGZpbGVuYW1lLCBjdHlwZSkpCgogICAgaWYgY3R5cGUgPT0gJ3RleHQveC1jZm5pbml0ZGF0YSc6CiAgICAgICAgd2l0aCBvcGVuKCcvdmFyL2xpYi9oZWF0LWNmbnRvb2xzLyVzJyAlIGZpbGVuYW1lLCAndycpIGFzIGY6CiAgICAgICAgICAgIGYud3JpdGUocGF5bG9hZCkKCiAgICAgI
CAgIyBUT0RPKHNkYWtlKSBob3BlZnVsbHkgdGVtcG9yYXJ5IHVudGlsIHVzZXJzIG1vdmUgdG8gaGVhdC1jZm50b29scy0xLjMKICAgICAgICB3aXRoIG9wZW4oJy92YXIvbGliL2Nsb3VkL2RhdGEvJXMnICUgZmlsZW5hbWUsICd3JykgYXMgZjoKICAgICAgICAgICAgZi53cml0ZShwYXlsb2FkKQoKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtY2ZuaW5pdGRhdGE7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJjZm4tdXNlcmRhdGEiCgoKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0KQ29udGVudC1UeXBlOiB0ZXh0L3gtc2hlbGxzY3JpcHQ7IGNoYXJzZXQ9InVzLWFzY2lpIgpNSU1FLVZlcnNpb246IDEuMApDb250ZW50LVRyYW5zZmVyLUVuY29kaW5nOiA3Yml0CkNvbnRlbnQtRGlzcG9zaXRpb246IGF0dGFjaG1lbnQ7IGZpbGVuYW1lPSJsb2d1c2VyZGF0YS5weSIKCiMhL3Vzci9iaW4vZW52IHB5dGhvbjMKIwojICAgIExpY2Vuc2VkIHVuZGVyIHRoZSBBcGFjaGUgTGljZW5zZSwgVmVyc2lvbiAyLjAgKHRoZSAiTGljZW5zZSIpOyB5b3UgbWF5CiMgICAgbm90IHVzZSB0aGlzIGZpbGUgZXhjZXB0IGluIGNvbXBsaWFuY2Ugd2l0aCB0aGUgTGljZW5zZS4gWW91IG1heSBvYnRhaW4KIyAgICBhIGNvcHkgb2YgdGhlIExpY2Vuc2UgYXQKIwojICAgICAgICAgaHR0cDovL3d3dy5hcGFjaGUub3JnL2xpY2Vuc2VzL0xJQ0VOU0UtMi4wCiMKIyAgICBVbmxlc3MgcmVxdWlyZWQgYnkgYXBwbGljYWJsZSBsYXcgb3IgYWdyZWVkIHRvIGluIHdyaXRpbmcsIHNvZnR3YXJlCiMgICAgZGlzdHJpYnV0ZWQgdW5kZXIgdGhlIExpY2Vuc2UgaXMgZGlzdHJpYnV0ZWQgb24gYW4gIkFTIElTIiBCQVNJUywgV0lUSE9VVAojICAgIFdBUlJBTlRJRVMgT1IgQ09ORElUSU9OUyBPRiBBTlkgS0lORCwgZWl0aGVyIGV4cHJlc3Mgb3IgaW1wbGllZC4gU2VlIHRoZQojICAgIExpY2Vuc2UgZm9yIHRoZSBzcGVjaWZpYyBsYW5ndWFnZSBnb3Zlcm5pbmcgcGVybWlzc2lvbnMgYW5kIGxpbWl0YXRpb25zCiMgICAgdW5kZXIgdGhlIExpY2Vuc2UuCgppbXBvcnQgZGF0ZXRpbWUKaW1wb3J0IGVycm5vCmltcG9ydCBsb2dnaW5nCmltcG9ydCBvcwppbXBvcnQgc3VicHJvY2VzcwppbXBvcnQgc3lzCgoKVkFSX1BBVEggPSAnL3Zhci9saWIvaGVhdC1jZm50b29scycKTE9HID0gbG9nZ2luZy5nZXRMb2dnZXIoJ2hlYXQtcHJvdmlzaW9uJykKCgpkZWYgaW5pdF9sb2dnaW5nKCk6CiAgICBMT0cuc2V0TGV2ZWwobG9nZ2luZy5JTkZPKQogICAgTE9HLmFkZEhhbmRsZXIobG9nZ2luZy5TdHJlYW1IYW5kbGVyKCkpCiAgICBmaCA9IGxvZ2dpbmcuRmlsZUhhbmRsZXIoIi92YXIvbG9nL2hlYXQtcHJvdmlzaW9uLmxvZyIpCiAgICBvcy5ja
G1vZChmaC5iYXNlRmlsZW5hbWUsIGludCgiNjAwIiwgOCkpCiAgICBMT0cuYWRkSGFuZGxlcihmaCkKCgpkZWYgY2FsbChhcmdzKToKCiAgICBjbGFzcyBMb2dTdHJlYW0ob2JqZWN0KToKCiAgICAgICAgZGVmIHdyaXRlKHNlbGYsIGRhdGEpOgogICAgICAgICAgICBMT0cuaW5mbyhkYXRhKQoKICAgIExPRy5pbmZvK
Jan 05 21:24:43 compute-0 nova_compute[186018]: Cclc1xuJywgJyAnLmpvaW4oYXJncykpICAjIG5vcWEKICAgIHRyeToKICAgICAgICBscyA9IExvZ1N0cmVhbSgpCiAgICAgICAgcCA9IHN1YnByb2Nlc3MuUG9wZW4oYXJncywgc3Rkb3V0PXN1YnByb2Nlc3MuUElQRSwKICAgICAgICAgICAgICAgICAgICAgICAgICAgICBzdGRlcnI9c3VicHJvY2Vzcy5QSVBFKQogICAgICAgIGRhdGEgPSBwLmNvbW11bmljYXRlKCkKICAgICAgICBpZiBkYXRhOgogICAgICAgICAgICBmb3IgeCBpbiBkYXRhOgogICAgICAgICAgICAgICAgbHMud3JpdGUoeCkKICAgIGV4Y2VwdCBPU0Vycm9yOgogICAgICAgIGV4X3R5cGUsIGV4LCB0YiA9IHN5cy5leGNfaW5mbygpCiAgICAgICAgaWYgZXguZXJybm8gPT0gZXJybm8uRU5PRVhFQzoKICAgICAgICAgICAgTE9HLmVycm9yKCdVc2VyZGF0YSBlbXB0eSBvciBub3QgZXhlY3V0YWJsZTogJXMnLCBleCkKICAgICAgICAgICAgcmV0dXJuIG9zLkVYX09LCiAgICAgICAgZWxzZToKICAgICAgICAgICAgTE9HLmVycm9yKCdPUyBlcnJvciBydW5uaW5nIHVzZXJkYXRhOiAlcycsIGV4KQogICAgICAgICAgICByZXR1cm4gb3MuRVhfT1NFUlIKICAgIGV4Y2VwdCBFeGNlcHRpb246CiAgICAgICAgZXhfdHlwZSwgZXgsIHRiID0gc3lzLmV4Y19pbmZvKCkKICAgICAgICBMT0cuZXJyb3IoJ1Vua25vd24gZXJyb3IgcnVubmluZyB1c2VyZGF0YTogJXMnLCBleCkKICAgICAgICByZXR1cm4gb3MuRVhfU09GVFdBUkUKICAgIHJldHVybiBwLnJldHVybmNvZGUKCgpkZWYgbWFpbigpOgogICAgdXNlcmRhdGFfcGF0aCA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ2Nmbi11c2VyZGF0YScpCiAgICBvcy5jaG1vZCh1c2VyZGF0YV9wYXRoLCBpbnQoIjcwMCIsIDgpKQoKICAgIExPRy5pbmZvKCdQcm92aXNpb24gYmVnYW46ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICByZXR1cm5jb2RlID0gY2FsbChbdXNlcmRhdGFfcGF0aF0pCiAgICBMT0cuaW5mbygnUHJvdmlzaW9uIGRvbmU6ICVzJywgZGF0ZXRpbWUuZGF0ZXRpbWUubm93KCkpCiAgICBpZiByZXR1cm5jb2RlOgogICAgICAgIHJldHVybiByZXR1cm5jb2RlCgoKaWYgX19uYW1lX18gPT0gJ19fbWFpbl9fJzoKICAgIGluaXRfbG9nZ2luZygpCgogICAgY29kZSA9IG1haW4oKQogICAgaWYgY29kZToKICAgICAgICBMT0cuZXJyb3IoJ1Byb3Zpc2lvbiBmYWlsZWQgd2l0aCBleGl0IGNvZGUgJXMnLCBjb2RlKQogICAgICAgIHN5cy5leGl0KGNvZGUpCgogICAgcHJvdmlzaW9uX2xvZyA9IG9zLnBhdGguam9pbihWQVJfUEFUSCwgJ3Byb3Zpc2lvbi1maW5pc2hlZCcpCiAgICAjIHRvdWNoIHRoZSBmaWxlIHNvIGl0IGlzIHRpbWVzdGFtcGVkIHdpdGggd2hlbiBmaW5pc2hlZAogICAgd2l0aCBvcGVuKHByb3Zpc2lvbl9sb2csICdhJyk6CiAgICAgICAgb3MudXRpbWUocHJvdmlzaW9uX2xvZywgTm9uZSkKCi0tPT09PT09PT09PT09PT09MTIyN
jc3ODAyMzgwMDQxNzY4OD09CkNvbnRlbnQtVHlwZTogdGV4dC94LWNmbmluaXRkYXRhOyBjaGFyc2V0PSJ1cy1hc2NpaSIKTUlNRS1WZXJzaW9uOiAxLjAKQ29udGVudC1UcmFuc2Zlci1FbmNvZGluZzogN2JpdApDb250ZW50LURpc3Bvc2l0aW9uOiBhdHRhY2htZW50OyBmaWxlbmFtZT0iY2ZuLW1ldGFkYXRhLXNlcnZlciIKCmh0dHBzOi8vaGVhdC1jZm5hcGktaW50ZXJuYWwub3BlbnN0YWNrLnN2Yzo4MDAwL3YxLwotLT09PT09PT09PT09PT09PTEyMjY3NzgwMjM4MDA0MTc2ODg9PQpDb250ZW50LVR5cGU6IHRleHQveC1jZm5pbml0ZGF0YTsgY2hhcnNldD0idXMtYXNjaWkiCk1JTUUtVmVyc2lvbjogMS4wCkNvbnRlbnQtVHJhbnNmZXItRW5jb2Rpbmc6IDdiaXQKQ29udGVudC1EaXNwb3NpdGlvbjogYXR0YWNobWVudDsgZmlsZW5hbWU9ImNmbi1ib3RvLWNmZyIKCltCb3RvXQpkZWJ1ZyA9IDAKaXNfc2VjdXJlID0gMApodHRwc192YWxpZGF0ZV9jZXJ0aWZpY2F0ZXMgPSAxCmNmbl9yZWdpb25fbmFtZSA9IGhlYXQKY2ZuX3JlZ2lvbl9lbmRwb2ludCA9IGhlYXQtY2ZuYXBpLWludGVybmFsLm9wZW5zdGFjay5zdmMKLS09PT09PT09PT09PT09PT0xMjI2Nzc4MDIzODAwNDE3Njg4PT0tLQo=',user_id='41f377b42540490198f271301cf5fe90',uuid=4f980272-c18f-4c66-9c04-8a07a7115de7,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, 
"preserve_on_delete": true, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.756 186022 DEBUG nova.network.os_vif_util [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "6fba2106-2ecf-47b1-ba86-3ca344528342", "address": "fa:16:3e:71:37:b5", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.7", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.208", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap6fba2106-2e", "ovs_interfaceid": "6fba2106-2ecf-47b1-ba86-3ca344528342", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": true, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.756 186022 DEBUG nova.network.os_vif_util [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.757 186022 DEBUG os_vif [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.758 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.759 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap6fba2106-2e, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.761 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.762 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.763 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.765 186022 INFO os_vif [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:71:37:b5,bridge_name='br-int',has_traffic_filtering=True,id=6fba2106-2ecf-47b1-ba86-3ca344528342,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=True,vif_name='tap6fba2106-2e')
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.766 186022 INFO nova.virt.libvirt.driver [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Deleting instance files /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7_del
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.766 186022 INFO nova.virt.libvirt.driver [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Deletion of /var/lib/nova/instances/4f980272-c18f-4c66-9c04-8a07a7115de7_del complete
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.796 186022 DEBUG nova.compute.manager [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-vif-unplugged-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.796 186022 DEBUG oslo_concurrency.lockutils [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.797 186022 DEBUG oslo_concurrency.lockutils [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.797 186022 DEBUG oslo_concurrency.lockutils [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.797 186022 DEBUG nova.compute.manager [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] No waiting events found dispatching network-vif-unplugged-6fba2106-2ecf-47b1-ba86-3ca344528342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.797 186022 DEBUG nova.compute.manager [req-82ae9568-01f5-46ed-8d90-6fca3ddc3ab0 req-c78cbc9d-701e-4cc3-b807-6649071125e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-vif-unplugged-6fba2106-2ecf-47b1-ba86-3ca344528342 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.829 186022 INFO nova.compute.manager [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Took 0.39 seconds to destroy the instance on the hypervisor.
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.830 186022 DEBUG oslo.service.loopingcall [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.830 186022 DEBUG nova.compute.manager [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.830 186022 DEBUG nova.network.neutron [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.867 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=9, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=8) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:24:43 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:43.868 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:24:43 compute-0 nova_compute[186018]: 2026-01-05 21:24:43.868 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:44 compute-0 rsyslogd[237695]: message too long (8192) with configured size 8096, begin of message is: 2026-01-05 21:24:43.755 186022 DEBUG nova.virt.libvirt.vif [None req-2cab3775-5c [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.879 186022 DEBUG nova.compute.manager [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.880 186022 DEBUG oslo_concurrency.lockutils [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.880 186022 DEBUG oslo_concurrency.lockutils [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.880 186022 DEBUG oslo_concurrency.lockutils [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.881 186022 DEBUG nova.compute.manager [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] No waiting events found dispatching network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:24:45 compute-0 nova_compute[186018]: 2026-01-05 21:24:45.881 186022 WARNING nova.compute.manager [req-4d7dbe63-2b5f-4ccc-98b4-22728b853921 req-bf1435ef-93fe-481e-92e6-42826adff3ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received unexpected event network-vif-plugged-6fba2106-2ecf-47b1-ba86-3ca344528342 for instance with vm_state active and task_state deleting.
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.206 186022 DEBUG nova.compute.manager [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Received event network-changed-6fba2106-2ecf-47b1-ba86-3ca344528342 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.206 186022 DEBUG nova.compute.manager [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Refreshing instance network info cache due to event network-changed-6fba2106-2ecf-47b1-ba86-3ca344528342. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.206 186022 DEBUG oslo_concurrency.lockutils [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.206 186022 DEBUG oslo_concurrency.lockutils [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.207 186022 DEBUG nova.network.neutron [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Refreshing network info cache for port 6fba2106-2ecf-47b1-ba86-3ca344528342 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.366 186022 INFO nova.network.neutron [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Port 6fba2106-2ecf-47b1-ba86-3ca344528342 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.366 186022 DEBUG nova.network.neutron [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.398 186022 DEBUG oslo_concurrency.lockutils [req-a3a0c2d1-a461-403f-b029-7283a4706ca6 req-ddb2d736-c991-4059-b0ba-399e49f2cf9a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-4f980272-c18f-4c66-9c04-8a07a7115de7" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.510 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.634 186022 DEBUG nova.network.neutron [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.723 186022 INFO nova.compute.manager [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Took 3.89 seconds to deallocate network for instance.
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.885 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.886 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:24:47 compute-0 nova_compute[186018]: 2026-01-05 21:24:47.990 186022 DEBUG nova.compute.provider_tree [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:24:48 compute-0 nova_compute[186018]: 2026-01-05 21:24:48.010 186022 DEBUG nova.scheduler.client.report [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:24:48 compute-0 nova_compute[186018]: 2026-01-05 21:24:48.063 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.178s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:48 compute-0 nova_compute[186018]: 2026-01-05 21:24:48.110 186022 INFO nova.scheduler.client.report [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Deleted allocations for instance 4f980272-c18f-4c66-9c04-8a07a7115de7
Jan 05 21:24:48 compute-0 nova_compute[186018]: 2026-01-05 21:24:48.188 186022 DEBUG oslo_concurrency.lockutils [None req-2cab3775-5c2a-40d6-a006-57357c5ad02f 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "4f980272-c18f-4c66-9c04-8a07a7115de7" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.752s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:24:48 compute-0 nova_compute[186018]: 2026-01-05 21:24:48.762 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:50 compute-0 podman[248801]: 2026-01-05 21:24:50.743377709 +0000 UTC m=+0.090546307 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 05 21:24:50 compute-0 podman[248800]: 2026-01-05 21:24:50.791838661 +0000 UTC m=+0.141144835 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, com.redhat.component=ubi9-container, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.4, managed_by=edpm_ansible, release=1214.1726694543, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:24:52 compute-0 nova_compute[186018]: 2026-01-05 21:24:52.513 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:52 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:24:52.870 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '9'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:24:53 compute-0 nova_compute[186018]: 2026-01-05 21:24:53.766 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:54 compute-0 podman[248841]: 2026-01-05 21:24:54.800034352 +0000 UTC m=+0.090348562 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4)
Jan 05 21:24:57 compute-0 nova_compute[186018]: 2026-01-05 21:24:57.516 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:58 compute-0 nova_compute[186018]: 2026-01-05 21:24:58.732 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648283.7298937, 4f980272-c18f-4c66-9c04-8a07a7115de7 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:24:58 compute-0 nova_compute[186018]: 2026-01-05 21:24:58.733 186022 INFO nova.compute.manager [-] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] VM Stopped (Lifecycle Event)
Jan 05 21:24:58 compute-0 nova_compute[186018]: 2026-01-05 21:24:58.757 186022 DEBUG nova.compute.manager [None req-7e41c809-37fa-44ae-8aa7-4e5e33fad3e6 - - - - - -] [instance: 4f980272-c18f-4c66-9c04-8a07a7115de7] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:24:58 compute-0 nova_compute[186018]: 2026-01-05 21:24:58.768 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:24:59 compute-0 podman[202426]: time="2026-01-05T21:24:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:24:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:24:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:24:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:24:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4363 "" "Go-http-client/1.1"
Jan 05 21:25:01 compute-0 openstack_network_exporter[205720]: ERROR   21:25:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:25:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:25:01 compute-0 openstack_network_exporter[205720]: ERROR   21:25:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:25:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:25:01 compute-0 podman[248863]: 2026-01-05 21:25:01.740333643 +0000 UTC m=+0.086349037 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, container_name=openstack_network_exporter, distribution-scope=public, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, version=9.6, com.redhat.component=ubi9-minimal-container, maintainer=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, release=1755695350, vendor=Red Hat, Inc., io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, config_id=openstack_network_exporter)
Jan 05 21:25:01 compute-0 podman[248862]: 2026-01-05 21:25:01.763896491 +0000 UTC m=+0.115084521 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.318 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.319 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.319 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.319 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.320 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.321 186022 INFO nova.compute.manager [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Terminating instance
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.322 186022 DEBUG nova.compute.manager [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:25:02 compute-0 kernel: tap9f21c713-15 (unregistering): left promiscuous mode
Jan 05 21:25:02 compute-0 NetworkManager[56598]: <info>  [1767648302.3625] device (tap9f21c713-15): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:25:02 compute-0 ovn_controller[98229]: 2026-01-05T21:25:02Z|00067|binding|INFO|Releasing lport 9f21c713-156d-4cef-99ef-70022fb8e58b from this chassis (sb_readonly=0)
Jan 05 21:25:02 compute-0 ovn_controller[98229]: 2026-01-05T21:25:02Z|00068|binding|INFO|Setting lport 9f21c713-156d-4cef-99ef-70022fb8e58b down in Southbound
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.372 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 ovn_controller[98229]: 2026-01-05T21:25:02Z|00069|binding|INFO|Removing iface tap9f21c713-15 ovn-installed in OVS
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.374 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.380 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:98:b1:c7 192.168.0.17'], port_security=['fa:16:3e:98:b1:c7 192.168.0.17'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '192.168.0.17/24', 'neutron:device_id': 'f64de408-e6d1-4f7f-9f94-e20a4c83a87a', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '704814115a61471f9b45484171f67b5f', 'neutron:revision_number': '4', 'neutron:security_group_ids': '02c7eb5a-98f1-49fb-80bc-9ee05faa964b', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.227'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=0df9bc1d-5579-4059-ac66-a97b4c7350db, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=9f21c713-156d-4cef-99ef-70022fb8e58b) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.381 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 9f21c713-156d-4cef-99ef-70022fb8e58b in datapath b871481f-0445-42f2-8b6a-2e8572ae5b49 unbound from our chassis
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.382 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network b871481f-0445-42f2-8b6a-2e8572ae5b49, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.383 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c58cf23e-0eb0-4f59-b060-8b91529b1c5d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.384 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49 namespace which is not needed anymore
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.391 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Deactivated successfully.
Jan 05 21:25:02 compute-0 systemd[1]: machine-qemu\x2d1\x2dinstance\x2d00000001.scope: Consumed 3min 15.614s CPU time.
Jan 05 21:25:02 compute-0 systemd-machined[157312]: Machine qemu-1-instance-00000001 terminated.
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.517 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [NOTICE]   (240637) : haproxy version is 2.8.14-c23fe91
Jan 05 21:25:02 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [NOTICE]   (240637) : path to executable is /usr/sbin/haproxy
Jan 05 21:25:02 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [WARNING]  (240637) : Exiting Master process...
Jan 05 21:25:02 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [ALERT]    (240637) : Current worker (240639) exited with code 143 (Terminated)
Jan 05 21:25:02 compute-0 neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49[240632]: [WARNING]  (240637) : All workers exited. Exiting... (0)
Jan 05 21:25:02 compute-0 systemd[1]: libpod-233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01.scope: Deactivated successfully.
Jan 05 21:25:02 compute-0 podman[248931]: 2026-01-05 21:25:02.53632522 +0000 UTC m=+0.056204096 container died 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01-userdata-shm.mount: Deactivated successfully.
Jan 05 21:25:02 compute-0 systemd[1]: var-lib-containers-storage-overlay-cda2b6ea0f460aebf5e928b181d66261e487beffffac1eb57115d75f78f4611c-merged.mount: Deactivated successfully.
Jan 05 21:25:02 compute-0 podman[248931]: 2026-01-05 21:25:02.585685075 +0000 UTC m=+0.105563971 container cleanup 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3)
Jan 05 21:25:02 compute-0 systemd[1]: libpod-conmon-233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01.scope: Deactivated successfully.
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.611 186022 INFO nova.virt.libvirt.driver [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Instance destroyed successfully.
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.611 186022 DEBUG nova.objects.instance [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lazy-loading 'resources' on Instance uuid f64de408-e6d1-4f7f-9f94-e20a4c83a87a obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.640 186022 DEBUG nova.virt.libvirt.vif [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:06:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='test_0',display_name='test_0',ec2_ids=<?>,ephemeral_gb=1,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(1),hidden=False,host='compute-0.ctlplane.example.com',hostname='test-0',id=1,image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',info_cache=InstanceInfoCache,instance_type_id=1,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:06:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=512,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='704814115a61471f9b45484171f67b5f',ramdisk_id='',reservation_id='r-i94me5j7',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='admin,member,reader',image_base_image_ref='31cf9c34-2e56-49e9-bb98-955ac3cf9185',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',image_owner_specified.openstack.md5='',image_owner_specified.openstack.object='images/cirros',image_owner_specified.openstack.sha256='',owner_project_name='admin',owner_user_name='admin'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:06:27Z,user_data=None,user_id='41f377b42540490198f271301cf5fe90',uuid=f64de408-e6d1-4f7f-9f94-e20a4c83a87a,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.641 186022 DEBUG nova.network.os_vif_util [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converting VIF {"id": "9f21c713-156d-4cef-99ef-70022fb8e58b", "address": "fa:16:3e:98:b1:c7", "network": {"id": "b871481f-0445-42f2-8b6a-2e8572ae5b49", "bridge": "br-int", "label": "private", "subnets": [{"cidr": "192.168.0.0/24", "dns": [], "gateway": {"address": "192.168.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "192.168.0.17", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.227", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "704814115a61471f9b45484171f67b5f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9f21c713-15", "ovs_interfaceid": "9f21c713-156d-4cef-99ef-70022fb8e58b", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.642 186022 DEBUG nova.network.os_vif_util [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.643 186022 DEBUG os_vif [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.644 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.644 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9f21c713-15, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.653 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.655 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.659 186022 INFO os_vif [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:98:b1:c7,bridge_name='br-int',has_traffic_filtering=True,id=9f21c713-156d-4cef-99ef-70022fb8e58b,network=Network(b871481f-0445-42f2-8b6a-2e8572ae5b49),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9f21c713-15')
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.661 186022 INFO nova.virt.libvirt.driver [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Deleting instance files /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a_del
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.662 186022 INFO nova.virt.libvirt.driver [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Deletion of /var/lib/nova/instances/f64de408-e6d1-4f7f-9f94-e20a4c83a87a_del complete
Jan 05 21:25:02 compute-0 podman[248978]: 2026-01-05 21:25:02.678346117 +0000 UTC m=+0.059313578 container remove 233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.686 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[02203144-aea0-4d12-bc12-14a306cb7323]: (4, ('Mon Jan  5 09:25:02 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49 (233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01)\n233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01\nMon Jan  5 09:25:02 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49 (233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01)\n233717ab13ddd74f7a4eca20c3a8fa2832e22941efa44351559dfcb3517e1b01\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.688 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[5b4c45f5-c1f1-4166-9ded-0397ddf6ccf1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.689 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapb871481f-00, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.691 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 kernel: tapb871481f-00: left promiscuous mode
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.711 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[077244a9-b82b-4b3e-ae44-7d8cab694c2e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.712 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.721 186022 INFO nova.compute.manager [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.722 186022 DEBUG oslo.service.loopingcall [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.722 186022 DEBUG nova.compute.manager [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:25:02 compute-0 nova_compute[186018]: 2026-01-05 21:25:02.722 186022 DEBUG nova.network.neutron [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.728 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[240667af-d8fc-4652-b533-df4f4f8d3627]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.730 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[aa7f73e6-29c4-4134-82e0-f0471e80f2f9]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.744 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e2b4440d-3e27-4643-a8cd-8cd60fdf0272]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 393136, 'reachable_time': 36969, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 248992, 'error': None, 'target': 'ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:02 compute-0 systemd[1]: run-netns-ovnmeta\x2db871481f\x2d0445\x2d42f2\x2d8b6a\x2d2e8572ae5b49.mount: Deactivated successfully.
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.756 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-b871481f-0445-42f2-8b6a-2e8572ae5b49 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:25:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:02.757 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[6008b565-e48b-4136-ae15-2dd672202f4d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.441 186022 DEBUG nova.compute.manager [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-unplugged-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.442 186022 DEBUG oslo_concurrency.lockutils [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.443 186022 DEBUG oslo_concurrency.lockutils [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.444 186022 DEBUG oslo_concurrency.lockutils [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.445 186022 DEBUG nova.compute.manager [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] No waiting events found dispatching network-vif-unplugged-9f21c713-156d-4cef-99ef-70022fb8e58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.446 186022 DEBUG nova.compute.manager [req-66ef6866-2ff0-4aea-a0b8-ae556b1e14a1 req-390a6d7b-9f08-42a3-b22f-3f5dffcf3158 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-unplugged-9f21c713-156d-4cef-99ef-70022fb8e58b for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.769 186022 DEBUG nova.network.neutron [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.787 186022 INFO nova.compute.manager [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Took 1.06 seconds to deallocate network for instance.
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.840 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.841 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.921 186022 DEBUG nova.compute.provider_tree [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.936 186022 DEBUG nova.scheduler.client.report [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.960 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.119s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:03 compute-0 nova_compute[186018]: 2026-01-05 21:25:03.991 186022 INFO nova.scheduler.client.report [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Deleted allocations for instance f64de408-e6d1-4f7f-9f94-e20a4c83a87a
Jan 05 21:25:04 compute-0 nova_compute[186018]: 2026-01-05 21:25:04.070 186022 DEBUG oslo_concurrency.lockutils [None req-c1687909-d1a4-48d2-8807-3f62c6a8f865 41f377b42540490198f271301cf5fe90 704814115a61471f9b45484171f67b5f - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.751s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.513 186022 DEBUG nova.compute.manager [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-deleted-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.514 186022 DEBUG nova.compute.manager [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.514 186022 DEBUG oslo_concurrency.lockutils [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.514 186022 DEBUG oslo_concurrency.lockutils [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.515 186022 DEBUG oslo_concurrency.lockutils [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "f64de408-e6d1-4f7f-9f94-e20a4c83a87a-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.516 186022 DEBUG nova.compute.manager [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] No waiting events found dispatching network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:25:05 compute-0 nova_compute[186018]: 2026-01-05 21:25:05.516 186022 WARNING nova.compute.manager [req-d7d597cf-d8ce-427f-a03e-58fe46ca7530 req-d6450348-28d2-4829-ae9f-9eef0229be58 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Received unexpected event network-vif-plugged-9f21c713-156d-4cef-99ef-70022fb8e58b for instance with vm_state deleted and task_state None.
Jan 05 21:25:07 compute-0 nova_compute[186018]: 2026-01-05 21:25:07.521 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:07 compute-0 nova_compute[186018]: 2026-01-05 21:25:07.650 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:07 compute-0 podman[248994]: 2026-01-05 21:25:07.720399243 +0000 UTC m=+0.065312165 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 21:25:07 compute-0 podman[248995]: 2026-01-05 21:25:07.740440399 +0000 UTC m=+0.082726742 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.785 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.786 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.786 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.787 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c199dc0>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.797 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.798 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:25:07.799 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:25:12 compute-0 nova_compute[186018]: 2026-01-05 21:25:12.525 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:12 compute-0 nova_compute[186018]: 2026-01-05 21:25:12.652 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:13 compute-0 nova_compute[186018]: 2026-01-05 21:25:13.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:13 compute-0 nova_compute[186018]: 2026-01-05 21:25:13.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:25:13 compute-0 nova_compute[186018]: 2026-01-05 21:25:13.501 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:25:13 compute-0 nova_compute[186018]: 2026-01-05 21:25:13.502 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:13 compute-0 nova_compute[186018]: 2026-01-05 21:25:13.502 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:25:13 compute-0 podman[249037]: 2026-01-05 21:25:13.749886701 +0000 UTC m=+0.083466452 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:25:17 compute-0 nova_compute[186018]: 2026-01-05 21:25:17.529 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:17 compute-0 nova_compute[186018]: 2026-01-05 21:25:17.607 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648302.6060941, f64de408-e6d1-4f7f-9f94-e20a4c83a87a => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:25:17 compute-0 nova_compute[186018]: 2026-01-05 21:25:17.608 186022 INFO nova.compute.manager [-] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] VM Stopped (Lifecycle Event)
Jan 05 21:25:17 compute-0 nova_compute[186018]: 2026-01-05 21:25:17.654 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:17 compute-0 nova_compute[186018]: 2026-01-05 21:25:17.801 186022 DEBUG nova.compute.manager [None req-2d08f2cc-5da8-4fc4-ad00-fafe60a17549 - - - - - -] [instance: f64de408-e6d1-4f7f-9f94-e20a4c83a87a] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:25:18 compute-0 nova_compute[186018]: 2026-01-05 21:25:18.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:18 compute-0 nova_compute[186018]: 2026-01-05 21:25:18.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:20 compute-0 nova_compute[186018]: 2026-01-05 21:25:20.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:20 compute-0 rsyslogd[237695]: imjournal: journal files changed, reloading...  [v8.2510.0-2.el9 try https://www.rsyslog.com/e/0 ]
Jan 05 21:25:21 compute-0 nova_compute[186018]: 2026-01-05 21:25:21.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:21 compute-0 nova_compute[186018]: 2026-01-05 21:25:21.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:21 compute-0 podman[249063]: 2026-01-05 21:25:21.736136636 +0000 UTC m=+0.084500028 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.license=GPLv2)
Jan 05 21:25:21 compute-0 podman[249062]: 2026-01-05 21:25:21.744932537 +0000 UTC m=+0.086129441 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, version=9.4, managed_by=edpm_ansible, release=1214.1726694543, config_id=kepler, release-0.7.12=, distribution-scope=public, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.533 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.658 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.701 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.701 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.702 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:22 compute-0 nova_compute[186018]: 2026-01-05 21:25:22.702 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:25:23 compute-0 nova_compute[186018]: 2026-01-05 21:25:23.027 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:25:23 compute-0 nova_compute[186018]: 2026-01-05 21:25:23.028 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5358MB free_disk=72.41686248779297GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:25:23 compute-0 nova_compute[186018]: 2026-01-05 21:25:23.029 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:23 compute-0 nova_compute[186018]: 2026-01-05 21:25:23.029 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:24 compute-0 nova_compute[186018]: 2026-01-05 21:25:24.545 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:25:24 compute-0 nova_compute[186018]: 2026-01-05 21:25:24.546 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:25:24 compute-0 nova_compute[186018]: 2026-01-05 21:25:24.573 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:25:24 compute-0 nova_compute[186018]: 2026-01-05 21:25:24.808 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:25:25 compute-0 nova_compute[186018]: 2026-01-05 21:25:25.344 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:25:25 compute-0 nova_compute[186018]: 2026-01-05 21:25:25.345 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 2.316s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:25 compute-0 podman[249103]: 2026-01-05 21:25:25.717418745 +0000 UTC m=+0.067287247 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:25:26 compute-0 nova_compute[186018]: 2026-01-05 21:25:26.346 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:26 compute-0 nova_compute[186018]: 2026-01-05 21:25:26.346 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:27 compute-0 nova_compute[186018]: 2026-01-05 21:25:27.535 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:27 compute-0 nova_compute[186018]: 2026-01-05 21:25:27.661 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:29 compute-0 podman[202426]: time="2026-01-05T21:25:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:25:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:25:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:25:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:25:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 05 21:25:31 compute-0 openstack_network_exporter[205720]: ERROR   21:25:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:25:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:25:31 compute-0 openstack_network_exporter[205720]: ERROR   21:25:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:25:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:25:32 compute-0 nova_compute[186018]: 2026-01-05 21:25:32.538 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:32 compute-0 nova_compute[186018]: 2026-01-05 21:25:32.662 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:32 compute-0 podman[249124]: 2026-01-05 21:25:32.804448494 +0000 UTC m=+0.137116139 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.6, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_id=openstack_network_exporter, name=ubi9-minimal, io.openshift.tags=minimal rhel9, architecture=x86_64, release=1755695350)
Jan 05 21:25:32 compute-0 podman[249123]: 2026-01-05 21:25:32.83480418 +0000 UTC m=+0.174428688 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ovn_controller, container_name=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:25:33 compute-0 ovn_controller[98229]: 2026-01-05T21:25:33Z|00070|memory_trim|INFO|Detected inactivity (last active 30002 ms ago): trimming memory
Jan 05 21:25:37 compute-0 nova_compute[186018]: 2026-01-05 21:25:37.542 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:37 compute-0 nova_compute[186018]: 2026-01-05 21:25:37.664 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:38 compute-0 podman[249167]: 2026-01-05 21:25:38.755710808 +0000 UTC m=+0.099217130 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:25:38 compute-0 podman[249168]: 2026-01-05 21:25:38.780541944 +0000 UTC m=+0.120063501 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:25:42 compute-0 nova_compute[186018]: 2026-01-05 21:25:42.544 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:42 compute-0 nova_compute[186018]: 2026-01-05 21:25:42.666 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:42.862 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:25:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:42.863 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:25:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:25:42.863 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:25:44 compute-0 podman[249208]: 2026-01-05 21:25:44.721357607 +0000 UTC m=+0.075439052 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:25:47 compute-0 nova_compute[186018]: 2026-01-05 21:25:47.545 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:47 compute-0 nova_compute[186018]: 2026-01-05 21:25:47.669 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:52 compute-0 nova_compute[186018]: 2026-01-05 21:25:52.548 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:52 compute-0 nova_compute[186018]: 2026-01-05 21:25:52.671 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:52 compute-0 podman[249234]: 2026-01-05 21:25:52.725521496 +0000 UTC m=+0.073498531 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.build-date=20251202, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:25:52 compute-0 podman[249233]: 2026-01-05 21:25:52.739028443 +0000 UTC m=+0.089629547 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release=1214.1726694543, managed_by=edpm_ansible, version=9.4, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, build-date=2024-09-18T21:23:30, config_id=kepler, vcs-type=git)
Jan 05 21:25:56 compute-0 podman[249270]: 2026-01-05 21:25:56.727989385 +0000 UTC m=+0.075427373 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:25:57 compute-0 nova_compute[186018]: 2026-01-05 21:25:57.552 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:57 compute-0 nova_compute[186018]: 2026-01-05 21:25:57.675 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:25:58 compute-0 nova_compute[186018]: 2026-01-05 21:25:58.069 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_running_deleted_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:25:59 compute-0 podman[202426]: time="2026-01-05T21:25:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:25:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:25:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:25:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:25:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3893 "" "Go-http-client/1.1"
Jan 05 21:26:01 compute-0 openstack_network_exporter[205720]: ERROR   21:26:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:26:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:26:01 compute-0 openstack_network_exporter[205720]: ERROR   21:26:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:26:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:26:02 compute-0 nova_compute[186018]: 2026-01-05 21:26:02.553 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:02 compute-0 nova_compute[186018]: 2026-01-05 21:26:02.677 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:03 compute-0 podman[249290]: 2026-01-05 21:26:03.79382481 +0000 UTC m=+0.125770142 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, name=ubi9-minimal, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, config_id=openstack_network_exporter, version=9.6, vcs-type=git, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., release=1755695350, com.redhat.component=ubi9-minimal-container, distribution-scope=public)
Jan 05 21:26:03 compute-0 podman[249289]: 2026-01-05 21:26:03.82072656 +0000 UTC m=+0.160288523 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, container_name=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:26:07 compute-0 nova_compute[186018]: 2026-01-05 21:26:07.558 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:07 compute-0 nova_compute[186018]: 2026-01-05 21:26:07.680 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:09 compute-0 podman[249334]: 2026-01-05 21:26:09.72808659 +0000 UTC m=+0.071674554 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:26:09 compute-0 podman[249335]: 2026-01-05 21:26:09.752816203 +0000 UTC m=+0.090094800 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:26:12 compute-0 nova_compute[186018]: 2026-01-05 21:26:12.561 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:12 compute-0 nova_compute[186018]: 2026-01-05 21:26:12.684 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:14 compute-0 nova_compute[186018]: 2026-01-05 21:26:14.467 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:14 compute-0 nova_compute[186018]: 2026-01-05 21:26:14.467 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:26:15 compute-0 nova_compute[186018]: 2026-01-05 21:26:15.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:15 compute-0 nova_compute[186018]: 2026-01-05 21:26:15.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:26:15 compute-0 nova_compute[186018]: 2026-01-05 21:26:15.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:26:15 compute-0 nova_compute[186018]: 2026-01-05 21:26:15.489 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:26:15 compute-0 podman[249375]: 2026-01-05 21:26:15.756421432 +0000 UTC m=+0.105283090 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:26:17 compute-0 nova_compute[186018]: 2026-01-05 21:26:17.565 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:17 compute-0 nova_compute[186018]: 2026-01-05 21:26:17.689 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:18 compute-0 nova_compute[186018]: 2026-01-05 21:26:18.483 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:20 compute-0 nova_compute[186018]: 2026-01-05 21:26:20.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:21 compute-0 nova_compute[186018]: 2026-01-05 21:26:21.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:22 compute-0 nova_compute[186018]: 2026-01-05 21:26:22.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:22 compute-0 nova_compute[186018]: 2026-01-05 21:26:22.568 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:22 compute-0 nova_compute[186018]: 2026-01-05 21:26:22.693 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.498 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.500 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:26:23 compute-0 podman[249402]: 2026-01-05 21:26:23.744540237 +0000 UTC m=+0.078114083 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:26:23 compute-0 podman[249401]: 2026-01-05 21:26:23.764692649 +0000 UTC m=+0.101854270 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, vcs-type=git, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, name=ubi9, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, 
description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, distribution-scope=public, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, container_name=kepler)
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.825 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.826 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5365MB free_disk=72.41713333129883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.827 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:26:23 compute-0 nova_compute[186018]: 2026-01-05 21:26:23.827 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.276 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.276 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.372 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.390 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.392 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.392 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.565s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.524 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:24 compute-0 nova_compute[186018]: 2026-01-05 21:26:24.525 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:26:27 compute-0 nova_compute[186018]: 2026-01-05 21:26:27.547 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:27 compute-0 nova_compute[186018]: 2026-01-05 21:26:27.569 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:27 compute-0 nova_compute[186018]: 2026-01-05 21:26:27.691 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:27 compute-0 nova_compute[186018]: 2026-01-05 21:26:27.695 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:27 compute-0 podman[249440]: 2026-01-05 21:26:27.739070798 +0000 UTC m=+0.097862085 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:26:29 compute-0 podman[202426]: time="2026-01-05T21:26:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:26:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:26:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:26:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:26:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
Jan 05 21:26:30 compute-0 nova_compute[186018]: 2026-01-05 21:26:30.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:30 compute-0 nova_compute[186018]: 2026-01-05 21:26:30.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:26:30 compute-0 nova_compute[186018]: 2026-01-05 21:26:30.559 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:26:31 compute-0 openstack_network_exporter[205720]: ERROR   21:26:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:26:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:26:31 compute-0 openstack_network_exporter[205720]: ERROR   21:26:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:26:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:26:32 compute-0 nova_compute[186018]: 2026-01-05 21:26:32.571 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:32 compute-0 nova_compute[186018]: 2026-01-05 21:26:32.697 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:33 compute-0 nova_compute[186018]: 2026-01-05 21:26:33.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:26:34 compute-0 podman[249461]: 2026-01-05 21:26:34.761656461 +0000 UTC m=+0.104510090 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, vcs-type=git, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, distribution-scope=public, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64)
Jan 05 21:26:34 compute-0 podman[249460]: 2026-01-05 21:26:34.786356624 +0000 UTC m=+0.133021473 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_id=ovn_controller, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:26:37 compute-0 nova_compute[186018]: 2026-01-05 21:26:37.574 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:37 compute-0 nova_compute[186018]: 2026-01-05 21:26:37.701 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:40 compute-0 podman[249506]: 2026-01-05 21:26:40.732466976 +0000 UTC m=+0.077362754 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent)
Jan 05 21:26:40 compute-0 podman[249507]: 2026-01-05 21:26:40.773257753 +0000 UTC m=+0.109488502 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:26:42 compute-0 nova_compute[186018]: 2026-01-05 21:26:42.577 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:42 compute-0 nova_compute[186018]: 2026-01-05 21:26:42.704 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:26:42.864 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:26:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:26:42.865 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:26:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:26:42.865 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:26:46 compute-0 podman[249549]: 2026-01-05 21:26:46.710983684 +0000 UTC m=+0.069146617 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:26:47 compute-0 nova_compute[186018]: 2026-01-05 21:26:47.579 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:47 compute-0 nova_compute[186018]: 2026-01-05 21:26:47.706 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:52 compute-0 nova_compute[186018]: 2026-01-05 21:26:52.582 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:52 compute-0 nova_compute[186018]: 2026-01-05 21:26:52.709 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:54 compute-0 podman[249573]: 2026-01-05 21:26:54.724372045 +0000 UTC m=+0.073396228 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.k8s.display-name=Red Hat Universal Base 
Image 9, release-0.7.12=, architecture=x86_64, vcs-type=git, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, name=ubi9, release=1214.1726694543, config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:26:54 compute-0 podman[249574]: 2026-01-05 21:26:54.766418425 +0000 UTC m=+0.104285084 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:26:57 compute-0 nova_compute[186018]: 2026-01-05 21:26:57.585 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:57 compute-0 nova_compute[186018]: 2026-01-05 21:26:57.713 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:26:58 compute-0 podman[249609]: 2026-01-05 21:26:58.72248763 +0000 UTC m=+0.073480551 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:26:59 compute-0 podman[202426]: time="2026-01-05T21:26:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:26:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:26:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:26:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:26:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3894 "" "Go-http-client/1.1"
Jan 05 21:27:01 compute-0 openstack_network_exporter[205720]: ERROR   21:27:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:27:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:27:01 compute-0 openstack_network_exporter[205720]: ERROR   21:27:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:27:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:27:02 compute-0 nova_compute[186018]: 2026-01-05 21:27:02.587 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:02 compute-0 nova_compute[186018]: 2026-01-05 21:27:02.717 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:05 compute-0 podman[249627]: 2026-01-05 21:27:05.781135325 +0000 UTC m=+0.132123539 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:27:05 compute-0 podman[249628]: 2026-01-05 21:27:05.786023094 +0000 UTC m=+0.118278663 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.buildah.version=1.33.7, managed_by=edpm_ansible, vcs-type=git, version=9.6, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:27:07 compute-0 nova_compute[186018]: 2026-01-05 21:27:07.591 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:07 compute-0 nova_compute[186018]: 2026-01-05 21:27:07.720 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.786 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.787 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.787 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.788 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.790 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.791 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.791 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.796 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f1641e53e60>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': [], 'network.outgoing.packets': [], 'disk.root.size': [], 'network.outgoing.bytes': [], 'network.outgoing.bytes.delta': [], 'network.outgoing.bytes.rate': [], 'network.incoming.packets.error': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.799 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.800 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.801 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.802 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:27:07.803 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:27:11 compute-0 podman[249675]: 2026-01-05 21:27:11.739525522 +0000 UTC m=+0.088150878 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:27:11 compute-0 podman[249674]: 2026-01-05 21:27:11.775199254 +0000 UTC m=+0.117019630 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:27:12 compute-0 nova_compute[186018]: 2026-01-05 21:27:12.595 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:12 compute-0 nova_compute[186018]: 2026-01-05 21:27:12.722 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.475 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.476 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.476 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.494 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.495 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:16 compute-0 nova_compute[186018]: 2026-01-05 21:27:16.495 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:27:17 compute-0 nova_compute[186018]: 2026-01-05 21:27:17.596 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:17 compute-0 nova_compute[186018]: 2026-01-05 21:27:17.725 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:17 compute-0 podman[249713]: 2026-01-05 21:27:17.789737683 +0000 UTC m=+0.125405001 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:27:19 compute-0 nova_compute[186018]: 2026-01-05 21:27:19.478 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:21 compute-0 nova_compute[186018]: 2026-01-05 21:27:21.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:22 compute-0 nova_compute[186018]: 2026-01-05 21:27:22.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:22 compute-0 nova_compute[186018]: 2026-01-05 21:27:22.600 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:22 compute-0 nova_compute[186018]: 2026-01-05 21:27:22.730 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:23 compute-0 nova_compute[186018]: 2026-01-05 21:27:23.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:23 compute-0 nova_compute[186018]: 2026-01-05 21:27:23.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.491 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.492 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:27:25 compute-0 podman[249739]: 2026-01-05 21:27:25.767461892 +0000 UTC m=+0.099370784 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, 
org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:27:25 compute-0 podman[249738]: 2026-01-05 21:27:25.794731322 +0000 UTC m=+0.140204182 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, vendor=Red Hat, Inc., architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, config_id=kepler, name=ubi9, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, version=9.4, vcs-type=git, io.buildah.version=1.29.0, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, release=1214.1726694543, container_name=kepler, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.915 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.915 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5366MB free_disk=72.41713333129883GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.916 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:27:25 compute-0 nova_compute[186018]: 2026-01-05 21:27:25.916 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.107 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.108 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.123 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.148 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.149 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.175 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.228 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.262 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.284 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.286 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:27:26 compute-0 nova_compute[186018]: 2026-01-05 21:27:26.287 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.371s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:27:27 compute-0 nova_compute[186018]: 2026-01-05 21:27:27.603 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:27 compute-0 nova_compute[186018]: 2026-01-05 21:27:27.734 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:29 compute-0 nova_compute[186018]: 2026-01-05 21:27:29.290 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:27:29 compute-0 podman[202426]: time="2026-01-05T21:27:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:27:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:27:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:27:29 compute-0 podman[249775]: 2026-01-05 21:27:29.764995 +0000 UTC m=+0.107604042 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:27:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:27:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 05 21:27:31 compute-0 openstack_network_exporter[205720]: ERROR   21:27:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:27:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:27:31 compute-0 openstack_network_exporter[205720]: ERROR   21:27:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:27:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:27:32 compute-0 nova_compute[186018]: 2026-01-05 21:27:32.606 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:32 compute-0 nova_compute[186018]: 2026-01-05 21:27:32.737 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:36 compute-0 podman[249795]: 2026-01-05 21:27:36.723203925 +0000 UTC m=+0.075345489 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, managed_by=edpm_ansible, release=1755695350, vendor=Red Hat, Inc., maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, config_id=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 21:27:36 compute-0 podman[249794]: 2026-01-05 21:27:36.759527814 +0000 UTC m=+0.116016283 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:27:37 compute-0 nova_compute[186018]: 2026-01-05 21:27:37.609 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:37 compute-0 nova_compute[186018]: 2026-01-05 21:27:37.740 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:42 compute-0 nova_compute[186018]: 2026-01-05 21:27:42.612 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:42 compute-0 podman[249841]: 2026-01-05 21:27:42.741452144 +0000 UTC m=+0.077783885 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:27:42 compute-0 nova_compute[186018]: 2026-01-05 21:27:42.743 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:42 compute-0 podman[249840]: 2026-01-05 21:27:42.77198932 +0000 UTC m=+0.110399026 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:27:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:27:42.865 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:27:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:27:42.865 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:27:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:27:42.865 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:27:47 compute-0 nova_compute[186018]: 2026-01-05 21:27:47.616 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:47 compute-0 nova_compute[186018]: 2026-01-05 21:27:47.746 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:48 compute-0 podman[249882]: 2026-01-05 21:27:48.720872626 +0000 UTC m=+0.073791149 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:27:52 compute-0 nova_compute[186018]: 2026-01-05 21:27:52.622 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:52 compute-0 nova_compute[186018]: 2026-01-05 21:27:52.749 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:56 compute-0 podman[249906]: 2026-01-05 21:27:56.755830987 +0000 UTC m=+0.097433373 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, name=ubi9, distribution-scope=public, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, 
io.k8s.display-name=Red Hat Universal Base Image 9, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, io.buildah.version=1.29.0, vcs-type=git, build-date=2024-09-18T21:23:30, container_name=kepler)
Jan 05 21:27:56 compute-0 podman[249907]: 2026-01-05 21:27:56.757652585 +0000 UTC m=+0.090031967 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:27:57 compute-0 nova_compute[186018]: 2026-01-05 21:27:57.623 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:57 compute-0 nova_compute[186018]: 2026-01-05 21:27:57.752 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:27:59 compute-0 podman[202426]: time="2026-01-05T21:27:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:27:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:27:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:27:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:27:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3891 "" "Go-http-client/1.1"
Jan 05 21:28:00 compute-0 podman[249945]: 2026-01-05 21:28:00.77242045 +0000 UTC m=+0.128656697 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2)
Jan 05 21:28:01 compute-0 openstack_network_exporter[205720]: ERROR   21:28:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:28:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:28:01 compute-0 openstack_network_exporter[205720]: ERROR   21:28:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:28:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:28:02 compute-0 nova_compute[186018]: 2026-01-05 21:28:02.626 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:02 compute-0 nova_compute[186018]: 2026-01-05 21:28:02.756 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:07 compute-0 nova_compute[186018]: 2026-01-05 21:28:07.629 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:07 compute-0 nova_compute[186018]: 2026-01-05 21:28:07.758 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:07 compute-0 podman[249964]: 2026-01-05 21:28:07.779277207 +0000 UTC m=+0.115180071 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, release=1755695350, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, distribution-scope=public, vcs-type=git, io.openshift.expose-services=, maintainer=Red Hat, Inc., managed_by=edpm_ansible, container_name=openstack_network_exporter, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:28:07 compute-0 podman[249963]: 2026-01-05 21:28:07.808818837 +0000 UTC m=+0.158833274 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:28:12 compute-0 nova_compute[186018]: 2026-01-05 21:28:12.632 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:12 compute-0 nova_compute[186018]: 2026-01-05 21:28:12.761 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:13 compute-0 podman[250009]: 2026-01-05 21:28:13.738016904 +0000 UTC m=+0.090733416 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:28:13 compute-0 podman[250008]: 2026-01-05 21:28:13.744791933 +0000 UTC m=+0.101079260 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.484 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.485 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:16 compute-0 nova_compute[186018]: 2026-01-05 21:28:16.486 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:28:17 compute-0 nova_compute[186018]: 2026-01-05 21:28:17.635 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:17 compute-0 nova_compute[186018]: 2026-01-05 21:28:17.764 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:19 compute-0 podman[250048]: 2026-01-05 21:28:19.722635473 +0000 UTC m=+0.065256783 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:28:20 compute-0 nova_compute[186018]: 2026-01-05 21:28:20.481 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:21 compute-0 nova_compute[186018]: 2026-01-05 21:28:21.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:22 compute-0 nova_compute[186018]: 2026-01-05 21:28:22.638 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:22 compute-0 nova_compute[186018]: 2026-01-05 21:28:22.767 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:24 compute-0 nova_compute[186018]: 2026-01-05 21:28:24.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:24 compute-0 nova_compute[186018]: 2026-01-05 21:28:24.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:24 compute-0 nova_compute[186018]: 2026-01-05 21:28:24.463 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.477 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.508 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.509 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.509 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.509 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.640 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.770 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:27 compute-0 podman[250072]: 2026-01-05 21:28:27.796647916 +0000 UTC m=+0.134556894 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, architecture=x86_64, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543)
Jan 05 21:28:27 compute-0 podman[250073]: 2026-01-05 21:28:27.807746679 +0000 UTC m=+0.141935258 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.943 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.945 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5356MB free_disk=72.41715240478516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.945 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:28:27 compute-0 nova_compute[186018]: 2026-01-05 21:28:27.945 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.026 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.027 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.057 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.075 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.077 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:28:28 compute-0 nova_compute[186018]: 2026-01-05 21:28:28.077 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:28:29 compute-0 podman[202426]: time="2026-01-05T21:28:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:28:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:28:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:28:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:28:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 05 21:28:31 compute-0 nova_compute[186018]: 2026-01-05 21:28:31.061 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:28:31 compute-0 openstack_network_exporter[205720]: ERROR   21:28:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:28:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:28:31 compute-0 openstack_network_exporter[205720]: ERROR   21:28:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:28:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:28:31 compute-0 podman[250107]: 2026-01-05 21:28:31.779468835 +0000 UTC m=+0.116055725 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 05 21:28:32 compute-0 nova_compute[186018]: 2026-01-05 21:28:32.643 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:32 compute-0 nova_compute[186018]: 2026-01-05 21:28:32.773 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:37 compute-0 nova_compute[186018]: 2026-01-05 21:28:37.646 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:37 compute-0 nova_compute[186018]: 2026-01-05 21:28:37.775 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:38 compute-0 podman[250127]: 2026-01-05 21:28:38.766753325 +0000 UTC m=+0.100098424 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, maintainer=Red Hat, Inc., version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', 
'/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, build-date=2025-08-20T13:12:41, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:28:38 compute-0 podman[250126]: 2026-01-05 21:28:38.818364607 +0000 UTC m=+0.155505107 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, 
org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller)
Jan 05 21:28:42 compute-0 nova_compute[186018]: 2026-01-05 21:28:42.648 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:42 compute-0 nova_compute[186018]: 2026-01-05 21:28:42.778 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:28:42.866 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:28:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:28:42.867 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:28:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:28:42.867 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:28:44 compute-0 podman[250172]: 2026-01-05 21:28:44.715409345 +0000 UTC m=+0.067234685 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:28:44 compute-0 podman[250171]: 2026-01-05 21:28:44.733370889 +0000 UTC m=+0.089701488 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 21:28:47 compute-0 nova_compute[186018]: 2026-01-05 21:28:47.651 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:47 compute-0 nova_compute[186018]: 2026-01-05 21:28:47.781 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:50 compute-0 podman[250210]: 2026-01-05 21:28:50.769012367 +0000 UTC m=+0.110118608 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:28:52 compute-0 nova_compute[186018]: 2026-01-05 21:28:52.653 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:52 compute-0 nova_compute[186018]: 2026-01-05 21:28:52.782 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:57 compute-0 nova_compute[186018]: 2026-01-05 21:28:57.656 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:57 compute-0 nova_compute[186018]: 2026-01-05 21:28:57.786 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:28:58 compute-0 podman[250233]: 2026-01-05 21:28:58.76842419 +0000 UTC m=+0.101227754 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, release-0.7.12=, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, release=1214.1726694543, config_id=kepler, name=ubi9, version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-container, container_name=kepler, io.openshift.expose-services=, distribution-scope=public)
Jan 05 21:28:58 compute-0 podman[250234]: 2026-01-05 21:28:58.773327909 +0000 UTC m=+0.100410772 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 21:28:59 compute-0 podman[202426]: time="2026-01-05T21:28:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:28:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:28:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:28:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:28:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3897 "" "Go-http-client/1.1"
Jan 05 21:29:01 compute-0 openstack_network_exporter[205720]: ERROR   21:29:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:29:01 compute-0 openstack_network_exporter[205720]: ERROR   21:29:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:29:02 compute-0 nova_compute[186018]: 2026-01-05 21:29:02.658 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:02 compute-0 nova_compute[186018]: 2026-01-05 21:29:02.789 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:02 compute-0 podman[250273]: 2026-01-05 21:29:02.806760245 +0000 UTC m=+0.155049564 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0)
Jan 05 21:29:07 compute-0 nova_compute[186018]: 2026-01-05 21:29:07.662 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.787 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.788 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 nova_compute[186018]: 2026-01-05 21:29:07.792 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.792 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.ephemeral.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.797 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.797 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.798 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c1b5760>] with cache [{}], pollster history [{'disk.ephemeral.size': [], 'network.incoming.packets': [], 'network.outgoing.packets.drop': []}], and discovery cache [{'local_instances': []}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.798 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.800 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.root.size, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.800 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.801 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.packets.error, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.802 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.allocation, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.delta, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.803 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.803 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.packets.drop, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster memory.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.804 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.804 14 DEBUG ceilometer.polling.manager [-] Skip pollster power.state, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.capacity, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.805 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.806 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.read.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.usage, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.bytes, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.807 14 DEBUG ceilometer.polling.manager [-] Skip pollster cpu, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.latency, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.808 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.808 14 DEBUG ceilometer.polling.manager [-] Skip pollster disk.device.write.requests, no  resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.809 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.810 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.811 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.812 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.813 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:29:07.814 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:29:09 compute-0 podman[250294]: 2026-01-05 21:29:09.769076548 +0000 UTC m=+0.114595476 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, config_id=openstack_network_exporter, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., release=1755695350, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, vendor=Red Hat, Inc., version=9.6, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7)
Jan 05 21:29:09 compute-0 podman[250293]: 2026-01-05 21:29:09.811391995 +0000 UTC m=+0.158747511 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, 
container_name=ovn_controller, org.label-schema.schema-version=1.0)
Jan 05 21:29:12 compute-0 nova_compute[186018]: 2026-01-05 21:29:12.664 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:12 compute-0 nova_compute[186018]: 2026-01-05 21:29:12.795 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:14 compute-0 podman[250341]: 2026-01-05 21:29:14.884749457 +0000 UTC m=+0.104070268 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent)
Jan 05 21:29:14 compute-0 podman[250340]: 2026-01-05 21:29:14.90530895 +0000 UTC m=+0.139505854 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:29:17 compute-0 nova_compute[186018]: 2026-01-05 21:29:17.666 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:17 compute-0 nova_compute[186018]: 2026-01-05 21:29:17.797 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.486 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.487 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:18 compute-0 nova_compute[186018]: 2026-01-05 21:29:18.487 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:29:21 compute-0 podman[250384]: 2026-01-05 21:29:21.70616672 +0000 UTC m=+0.063358734 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:29:22 compute-0 nova_compute[186018]: 2026-01-05 21:29:22.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:22 compute-0 nova_compute[186018]: 2026-01-05 21:29:22.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:22 compute-0 nova_compute[186018]: 2026-01-05 21:29:22.668 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:22 compute-0 nova_compute[186018]: 2026-01-05 21:29:22.798 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:23.421 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=10, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=9) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:29:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:23.422 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 4 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:29:23 compute-0 nova_compute[186018]: 2026-01-05 21:29:23.426 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:24 compute-0 nova_compute[186018]: 2026-01-05 21:29:24.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:25 compute-0 nova_compute[186018]: 2026-01-05 21:29:25.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:26 compute-0 nova_compute[186018]: 2026-01-05 21:29:26.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:27 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:27.424 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '10'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.493 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.494 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.494 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.494 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.674 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.799 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.911 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.913 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5368MB free_disk=72.41715240478516GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.913 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:29:27 compute-0 nova_compute[186018]: 2026-01-05 21:29:27.913 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.089 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 0 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.089 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=512MB phys_disk=79GB used_disk=0GB total_vcpus=8 used_vcpus=0 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.126 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.144 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.147 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:29:28 compute-0 nova_compute[186018]: 2026-01-05 21:29:28.148 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.234s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:29:29 compute-0 podman[250408]: 2026-01-05 21:29:29.735843101 +0000 UTC m=+0.085522669 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, distribution-scope=public, io.buildah.version=1.29.0, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, architecture=x86_64, container_name=kepler, release=1214.1726694543, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, managed_by=edpm_ansible)
Jan 05 21:29:29 compute-0 podman[202426]: time="2026-01-05T21:29:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:29:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:29:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:29:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:29:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3896 "" "Go-http-client/1.1"
Jan 05 21:29:29 compute-0 podman[250409]: 2026-01-05 21:29:29.762275859 +0000 UTC m=+0.107439427 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 21:29:31 compute-0 nova_compute[186018]: 2026-01-05 21:29:31.148 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:29:31 compute-0 openstack_network_exporter[205720]: ERROR   21:29:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:29:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:29:31 compute-0 openstack_network_exporter[205720]: ERROR   21:29:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:29:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:29:32 compute-0 nova_compute[186018]: 2026-01-05 21:29:32.676 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:32 compute-0 nova_compute[186018]: 2026-01-05 21:29:32.801 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:33 compute-0 podman[250445]: 2026-01-05 21:29:33.750681166 +0000 UTC m=+0.105726422 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator 
team, managed_by=edpm_ansible, io.buildah.version=1.41.4, tcib_managed=true, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:29:37 compute-0 nova_compute[186018]: 2026-01-05 21:29:37.678 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:37 compute-0 nova_compute[186018]: 2026-01-05 21:29:37.803 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:40 compute-0 podman[250467]: 2026-01-05 21:29:40.80621221 +0000 UTC m=+0.138460906 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., vcs-type=git, distribution-scope=public, release=1755695350, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., version=9.6, architecture=x86_64, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:29:40 compute-0 podman[250466]: 2026-01-05 21:29:40.829845184 +0000 UTC m=+0.167192405 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_controller, 
io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:29:42 compute-0 nova_compute[186018]: 2026-01-05 21:29:42.681 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:42 compute-0 nova_compute[186018]: 2026-01-05 21:29:42.806 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:42.868 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:29:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:42.868 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:29:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:29:42.868 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:29:45 compute-0 podman[250509]: 2026-01-05 21:29:45.78408736 +0000 UTC m=+0.127225030 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:29:45 compute-0 podman[250510]: 2026-01-05 21:29:45.802715402 +0000 UTC m=+0.141361003 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:29:47 compute-0 nova_compute[186018]: 2026-01-05 21:29:47.684 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:47 compute-0 nova_compute[186018]: 2026-01-05 21:29:47.808 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:52 compute-0 nova_compute[186018]: 2026-01-05 21:29:52.686 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:52 compute-0 podman[250551]: 2026-01-05 21:29:52.715939957 +0000 UTC m=+0.073580782 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:29:52 compute-0 nova_compute[186018]: 2026-01-05 21:29:52.810 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:54 compute-0 ovn_controller[98229]: 2026-01-05T21:29:54Z|00071|memory_trim|INFO|Detected inactivity (last active 30014 ms ago): trimming memory
Jan 05 21:29:57 compute-0 nova_compute[186018]: 2026-01-05 21:29:57.689 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:57 compute-0 nova_compute[186018]: 2026-01-05 21:29:57.811 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:29:59 compute-0 podman[202426]: time="2026-01-05T21:29:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:29:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:29:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:29:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:29:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 05 21:30:00 compute-0 podman[250575]: 2026-01-05 21:30:00.742447766 +0000 UTC m=+0.089163708 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, vcs-type=git, release-0.7.12=, release=1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, distribution-scope=public, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, config_id=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., container_name=kepler, io.openshift.expose-services=, io.openshift.tags=base rhel9, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4)
Jan 05 21:30:00 compute-0 podman[250576]: 2026-01-05 21:30:00.774191502 +0000 UTC m=+0.112587185 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:30:01 compute-0 openstack_network_exporter[205720]: ERROR   21:30:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:30:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:30:01 compute-0 openstack_network_exporter[205720]: ERROR   21:30:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:30:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:30:02 compute-0 nova_compute[186018]: 2026-01-05 21:30:02.692 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:02 compute-0 nova_compute[186018]: 2026-01-05 21:30:02.815 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:03 compute-0 nova_compute[186018]: 2026-01-05 21:30:03.581 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:04 compute-0 nova_compute[186018]: 2026-01-05 21:30:04.089 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:04 compute-0 podman[250613]: 2026-01-05 21:30:04.73339993 +0000 UTC m=+0.087852414 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:30:05 compute-0 nova_compute[186018]: 2026-01-05 21:30:05.162 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:07 compute-0 nova_compute[186018]: 2026-01-05 21:30:07.274 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:07 compute-0 nova_compute[186018]: 2026-01-05 21:30:07.696 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:07 compute-0 nova_compute[186018]: 2026-01-05 21:30:07.818 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:10 compute-0 nova_compute[186018]: 2026-01-05 21:30:10.888 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:11 compute-0 podman[250632]: 2026-01-05 21:30:11.800106192 +0000 UTC m=+0.131400670 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, distribution-scope=public, name=ubi9-minimal, version=9.6, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:30:11 compute-0 podman[250631]: 2026-01-05 21:30:11.815948319 +0000 UTC m=+0.162326385 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_controller, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_id=ovn_controller)
Jan 05 21:30:12 compute-0 nova_compute[186018]: 2026-01-05 21:30:12.611 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:12 compute-0 nova_compute[186018]: 2026-01-05 21:30:12.699 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:12 compute-0 nova_compute[186018]: 2026-01-05 21:30:12.820 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:13 compute-0 nova_compute[186018]: 2026-01-05 21:30:13.385 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:14 compute-0 nova_compute[186018]: 2026-01-05 21:30:14.548 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:14 compute-0 nova_compute[186018]: 2026-01-05 21:30:14.865 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:16 compute-0 podman[250676]: 2026-01-05 21:30:16.722343545 +0000 UTC m=+0.068012051 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:30:16 compute-0 podman[250677]: 2026-01-05 21:30:16.771263063 +0000 UTC m=+0.105903439 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:30:17 compute-0 nova_compute[186018]: 2026-01-05 21:30:17.702 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:17 compute-0 nova_compute[186018]: 2026-01-05 21:30:17.823 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:18 compute-0 nova_compute[186018]: 2026-01-05 21:30:18.229 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:19 compute-0 nova_compute[186018]: 2026-01-05 21:30:19.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:19 compute-0 nova_compute[186018]: 2026-01-05 21:30:19.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:30:19 compute-0 nova_compute[186018]: 2026-01-05 21:30:19.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:30:19 compute-0 nova_compute[186018]: 2026-01-05 21:30:19.482 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Didn't find any instances for network info cache update. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9944
Jan 05 21:30:20 compute-0 nova_compute[186018]: 2026-01-05 21:30:20.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:20 compute-0 nova_compute[186018]: 2026-01-05 21:30:20.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:30:22 compute-0 nova_compute[186018]: 2026-01-05 21:30:22.707 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:22 compute-0 nova_compute[186018]: 2026-01-05 21:30:22.826 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:23 compute-0 nova_compute[186018]: 2026-01-05 21:30:23.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:23 compute-0 nova_compute[186018]: 2026-01-05 21:30:23.690 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:23.688 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=11, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=10) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:23.690 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 0 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:30:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:23.691 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '11'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:23 compute-0 podman[250719]: 2026-01-05 21:30:23.729626593 +0000 UTC m=+0.082128093 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.100 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.100 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.122 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.458 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.459 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.475 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.475 186022 INFO nova.compute.claims [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.578 186022 DEBUG nova.compute.provider_tree [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.592 186022 DEBUG nova.scheduler.client.report [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.612 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.153s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.613 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.670 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.671 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.692 186022 INFO nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.716 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.803 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.805 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.806 186022 INFO nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Creating image(s)
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.808 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.808 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.809 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.810 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.811 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:24 compute-0 nova_compute[186018]: 2026-01-05 21:30:24.920 186022 DEBUG nova.policy [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'e0899289c7dd4631b4fa69150a914123', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.181 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.182 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.199 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.273 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.274 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.284 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.284 186022 INFO nova.compute.claims [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.411 186022 DEBUG nova.compute.provider_tree [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.425 186022 DEBUG nova.scheduler.client.report [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.445 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.446 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.499 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.500 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.518 186022 INFO nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.538 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.621 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.623 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.623 186022 INFO nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Creating image(s)
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.624 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.624 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.625 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.625 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:25 compute-0 nova_compute[186018]: 2026-01-05 21:30:25.868 186022 DEBUG nova.policy [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '69ccd256a35f415ca66bb59592f26ea6', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '010a085a147e46ac9d1df9d6d76b673a', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.465 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Successfully created port: a6acaedc-5f9d-4aca-9e6b-c69623601aca _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.552 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.611 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.part --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.613 186022 DEBUG nova.virt.images [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] ebb2027f-05a6-465a-af75-b7da40a91332 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.614 186022 DEBUG nova.privsep.utils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.614 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.part /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.889 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.part /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.converted" returned: 0 in 0.275s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.893 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.951 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe.converted --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.952 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.142s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.965 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 1.340s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.966 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:26 compute-0 nova_compute[186018]: 2026-01-05 21:30:26.984 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.000 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.042 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.044 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.044 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.064 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.081 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.081s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.083 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.121 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.122 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.163 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk 1073741824" returned: 0 in 0.041s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.164 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.165 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.179 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.097s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.191 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.222 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.224 186022 DEBUG nova.virt.disk.api [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Checking if we can resize image /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.224 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.280 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.281 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.297 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.298 186022 DEBUG nova.virt.disk.api [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Cannot resize image /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.299 186022 DEBUG nova.objects.instance [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lazy-loading 'migration_context' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.325 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk 1073741824" returned: 0 in 0.044s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.327 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.328 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.344 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.345 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Ensure instance console log exists: /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.346 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.346 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.347 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.384 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.385 186022 DEBUG nova.virt.disk.api [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Checking if we can resize image /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.385 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.440 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.442 186022 DEBUG nova.virt.disk.api [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Cannot resize image /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.443 186022 DEBUG nova.objects.instance [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lazy-loading 'migration_context' on Instance uuid 55d782b9-fb70-40e6-b501-16b69cd9a3e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.468 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.468 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Ensure instance console log exists: /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.469 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.469 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.470 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.634 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Successfully updated port: a6acaedc-5f9d-4aca-9e6b-c69623601aca _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.671 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.671 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.672 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.711 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.828 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:27 compute-0 nova_compute[186018]: 2026-01-05 21:30:27.883 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Successfully created port: 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.015 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.075 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.076 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.100 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.177 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.178 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.194 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.195 186022 INFO nova.compute.claims [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.351 186022 DEBUG nova.compute.provider_tree [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.372 186022 DEBUG nova.scheduler.client.report [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.399 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.221s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.400 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.458 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.460 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.465 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.481 186022 INFO nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.516 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.676 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.678 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.678 186022 INFO nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Creating image(s)
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.679 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.680 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.681 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.701 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.763 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.764 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.765 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.776 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.835 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.836 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.879 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
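The two preceding qemu-img invocations first inspect the cached base image and then create a qcow2 overlay on top of it, so the instance disk only stores writes. A sketch of assembling the same `qemu-img create` argument list (paths and size are the ones from the log; nothing is executed here):

```python
def qcow2_overlay_cmd(base, target, size_bytes):
    """Build the argv for a qcow2 overlay backed by a raw base image,
    mirroring the command logged by oslo_concurrency.processutils."""
    return [
        "env", "LC_ALL=C", "LANG=C",
        "qemu-img", "create", "-f", "qcow2",
        "-o", f"backing_file={base},backing_fmt=raw",
        target, str(size_bytes),
    ]

cmd = qcow2_overlay_cmd(
    "/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe",
    "/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk",
    1073741824,
)
```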
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.881 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.116s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.882 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.943 186022 DEBUG nova.policy [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '8a7e00bbed09469a93a4c03517990c2b', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5ed80fade1274d8785b48dcf02608341', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.952 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.953 186022 DEBUG nova.virt.disk.api [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Checking if we can resize image /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:30:28 compute-0 nova_compute[186018]: 2026-01-05 21:30:28.954 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.012 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.014 186022 DEBUG nova.virt.disk.api [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Cannot resize image /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
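The "Cannot resize image ... to a smaller size" line reflects a simple guard: shrinking a qcow2 image can lose data, so the resize only proceeds when the requested size exceeds the current virtual size. A sketch of that check on `qemu-img info --output=json` output (the sample JSON values are illustrative, not taken from this log):

```python
import json

def can_resize_image(qemu_img_info_json, requested_bytes):
    """Allow a resize only when it grows the image, the same shape of
    check nova.virt.disk.api.can_resize_image performs."""
    virtual_size = json.loads(qemu_img_info_json)["virtual-size"]
    return requested_bytes > virtual_size

# Illustrative `qemu-img info --output=json` fragment.
info = '{"virtual-size": 1073741824, "format": "qcow2"}'
```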
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.015 186022 DEBUG nova.objects.instance [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lazy-loading 'migration_context' on Instance uuid c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.037 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.038 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Ensure instance console log exists: /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.038 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.039 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.039 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.137 186022 DEBUG nova.compute.manager [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Received event network-changed-a6acaedc-5f9d-4aca-9e6b-c69623601aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.137 186022 DEBUG nova.compute.manager [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Refreshing instance network info cache due to event network-changed-a6acaedc-5f9d-4aca-9e6b-c69623601aca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.138 186022 DEBUG oslo_concurrency.lockutils [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.318 186022 DEBUG nova.network.neutron [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.365 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.366 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Instance network_info: |[{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.367 186022 DEBUG oslo_concurrency.lockutils [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.367 186022 DEBUG nova.network.neutron [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Refreshing network info cache for port a6acaedc-5f9d-4aca-9e6b-c69623601aca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.371 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Start _get_guest_xml network_info=[{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.382 186022 WARNING nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.395 186022 DEBUG nova.virt.libvirt.host [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.396 186022 DEBUG nova.virt.libvirt.host [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.404 186022 DEBUG nova.virt.libvirt.host [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.405 186022 DEBUG nova.virt.libvirt.host [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.406 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.407 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.409 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.410 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.411 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.411 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.412 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.413 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.414 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.415 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.415 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.416 186022 DEBUG nova.virt.hardware [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.425 186022 DEBUG nova.virt.libvirt.vif [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-306597775',display_name='tempest-AttachInterfacesUnderV243Test-server-306597775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-306597775',id=6,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzJsOxiuTULk8/q0hY1W9WkBcRLCga+d26CjT7DvGJc7rSPinPqBrq7UGO1qQH2+oqwCgFKhjm+tGKBlvAWtvFpz/HNteBTebjLeYyV7634k5yxnXUtNTOdKhlMYJvAng==',key_name='tempest-keypair-1556320060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0899289c7dd4631b4fa69150a914123',ramdisk_id='',reservation_id='r-g8vfoexs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-280002567',owner_user_name='tempest-AttachInterfacesUnderV243Test-280002567-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='168ad639a6ed41c8bd954c434807ef6c',uuid=62f57876-af2d-4771-bffd-c87b7755cc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.427 186022 DEBUG nova.network.os_vif_util [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Converting VIF {"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.428 186022 DEBUG nova.network.os_vif_util [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0d:bf,bridge_name='br-int',has_traffic_filtering=True,id=a6acaedc-5f9d-4aca-9e6b-c69623601aca,network=Network(33bcb7a6-33e4-40b9-bab8-4665cf65dcc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6acaedc-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.429 186022 DEBUG nova.objects.instance [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lazy-loading 'pci_devices' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.446 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <uuid>62f57876-af2d-4771-bffd-c87b7755cc5c</uuid>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <name>instance-00000006</name>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:name>tempest-AttachInterfacesUnderV243Test-server-306597775</nova:name>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:30:29</nova:creationTime>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:user uuid="168ad639a6ed41c8bd954c434807ef6c">tempest-AttachInterfacesUnderV243Test-280002567-project-member</nova:user>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:project uuid="e0899289c7dd4631b4fa69150a914123">tempest-AttachInterfacesUnderV243Test-280002567</nova:project>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         <nova:port uuid="a6acaedc-5f9d-4aca-9e6b-c69623601aca">
Jan 05 21:30:29 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <system>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="serial">62f57876-af2d-4771-bffd-c87b7755cc5c</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="uuid">62f57876-af2d-4771-bffd-c87b7755cc5c</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </system>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <os>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </os>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <features>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </features>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.config"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:d3:0d:bf"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <target dev="tapa6acaedc-5f"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/console.log" append="off"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <video>
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </video>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:30:29 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:30:29 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:30:29 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:30:29 compute-0 nova_compute[186018]: </domain>
Jan 05 21:30:29 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.447 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Preparing to wait for external event network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.448 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.448 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.448 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.449 186022 DEBUG nova.virt.libvirt.vif [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:22Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-AttachInterfacesUnderV243Test-server-306597775',display_name='tempest-AttachInterfacesUnderV243Test-server-306597775',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-attachinterfacesunderv243test-server-306597775',id=6,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBBzJsOxiuTULk8/q0hY1W9WkBcRLCga+d26CjT7DvGJc7rSPinPqBrq7UGO1qQH2+oqwCgFKhjm+tGKBlvAWtvFpz/HNteBTebjLeYyV7634k5yxnXUtNTOdKhlMYJvAng==',key_name='tempest-keypair-1556320060',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='e0899289c7dd4631b4fa69150a914123',ramdisk_id='',reservation_id='r-g8vfoexs',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-AttachInterfacesUnderV243Test-280002567',owner_user_name='tempest-AttachInterfacesUnderV243Test-280002567-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:24Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='168ad639a6ed41c8bd954c434807ef6c',uuid=62f57876-af2d-4771-bffd-c87b7755cc5c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": 
[{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.449 186022 DEBUG nova.network.os_vif_util [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Converting VIF {"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.450 186022 DEBUG nova.network.os_vif_util [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0d:bf,bridge_name='br-int',has_traffic_filtering=True,id=a6acaedc-5f9d-4aca-9e6b-c69623601aca,network=Network(33bcb7a6-33e4-40b9-bab8-4665cf65dcc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6acaedc-5f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.450 186022 DEBUG os_vif [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0d:bf,bridge_name='br-int',has_traffic_filtering=True,id=a6acaedc-5f9d-4aca-9e6b-c69623601aca,network=Network(33bcb7a6-33e4-40b9-bab8-4665cf65dcc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6acaedc-5f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.451 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.451 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.452 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.455 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.455 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapa6acaedc-5f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.455 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapa6acaedc-5f, col_values=(('external_ids', {'iface-id': 'a6acaedc-5f9d-4aca-9e6b-c69623601aca', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:d3:0d:bf', 'vm-uuid': '62f57876-af2d-4771-bffd-c87b7755cc5c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.457 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:29 compute-0 NetworkManager[56598]: <info>  [1767648629.4599] manager: (tapa6acaedc-5f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/34)
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.460 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.467 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.469 186022 INFO os_vif [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:d3:0d:bf,bridge_name='br-int',has_traffic_filtering=True,id=a6acaedc-5f9d-4aca-9e6b-c69623601aca,network=Network(33bcb7a6-33e4-40b9-bab8-4665cf65dcc5),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapa6acaedc-5f')
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.480 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.481 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.481 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.481 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.534 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.534 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.535 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] No VIF found with MAC fa:16:3e:d3:0d:bf, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.535 186022 INFO nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Using config drive
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.584 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.652 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.654 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.715 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:29 compute-0 nova_compute[186018]: 2026-01-05 21:30:29.717 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Periodic task is updating the host stat, it is trying to get disk instance-00000006, but disk file was removed by concurrent operations such as resize.: FileNotFoundError: [Errno 2] No such file or directory: '/var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.config'
Jan 05 21:30:29 compute-0 podman[202426]: time="2026-01-05T21:30:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:30:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:30:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 27274 "" "Go-http-client/1.1"
Jan 05 21:30:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:30:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 3899 "" "Go-http-client/1.1"
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.006 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Successfully updated port: 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.021 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.021 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquired lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.021 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.099 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.100 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5350MB free_disk=72.38224411010742GB free_vcpus=8 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.101 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.101 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.180 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.181 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 55d782b9-fb70-40e6-b501-16b69cd9a3e1 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.181 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.181 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.182 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.266 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.298 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.323 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.324 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.223s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.335 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.626 186022 INFO nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Creating config drive at /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.config
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.638 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyv7lsfa3 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.774 186022 DEBUG oslo_concurrency.processutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpyv7lsfa3" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.822 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Successfully created port: 7233cede-206c-45d2-9447-e0c1aafe27d2 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:30:30 compute-0 kernel: tapa6acaedc-5f: entered promiscuous mode
Jan 05 21:30:30 compute-0 NetworkManager[56598]: <info>  [1767648630.8765] manager: (tapa6acaedc-5f): new Tun device (/org/freedesktop/NetworkManager/Devices/35)
Jan 05 21:30:30 compute-0 ovn_controller[98229]: 2026-01-05T21:30:30Z|00072|binding|INFO|Claiming lport a6acaedc-5f9d-4aca-9e6b-c69623601aca for this chassis.
Jan 05 21:30:30 compute-0 ovn_controller[98229]: 2026-01-05T21:30:30Z|00073|binding|INFO|a6acaedc-5f9d-4aca-9e6b-c69623601aca: Claiming fa:16:3e:d3:0d:bf 10.100.0.6
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.877 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:30 compute-0 ovn_controller[98229]: 2026-01-05T21:30:30Z|00074|binding|INFO|Setting lport a6acaedc-5f9d-4aca-9e6b-c69623601aca ovn-installed in OVS
Jan 05 21:30:30 compute-0 ovn_controller[98229]: 2026-01-05T21:30:30Z|00075|binding|INFO|Setting lport a6acaedc-5f9d-4aca-9e6b-c69623601aca up in Southbound
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.896 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.892 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:d3:0d:bf 10.100.0.6'], port_security=['fa:16:3e:d3:0d:bf 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'e0899289c7dd4631b4fa69150a914123', 'neutron:revision_number': '2', 'neutron:security_group_ids': '318f084d-2e05-4207-8337-538affe21e43', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=72f1c6a1-b3ed-4e18-8422-9fd39d977ddc, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=a6acaedc-5f9d-4aca-9e6b-c69623601aca) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.894 107689 INFO neutron.agent.ovn.metadata.agent [-] Port a6acaedc-5f9d-4aca-9e6b-c69623601aca in datapath 33bcb7a6-33e4-40b9-bab8-4665cf65dcc5 bound to our chassis
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.896 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 33bcb7a6-33e4-40b9-bab8-4665cf65dcc5
Jan 05 21:30:30 compute-0 nova_compute[186018]: 2026-01-05 21:30:30.903 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.912 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[540b3185-052b-4e55-b5f7-66087b8ed9ea]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.913 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap33bcb7a6-31 in ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.914 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap33bcb7a6-30 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.915 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[939a4374-92f1-4988-a7f9-b9bb124abb46]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.916 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9b56bc5b-d3c1-4d51-b089-a40b37105a08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 systemd-machined[157312]: New machine qemu-6-instance-00000006.
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.927 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[fdea3535-3178-4a10-833e-f048e5b154ce]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 systemd[1]: Started Virtual Machine qemu-6-instance-00000006.
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.953 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[fafd61d0-ae78-47e7-b5fa-15960b70866b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 systemd-udevd[250868]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:30:30 compute-0 NetworkManager[56598]: <info>  [1767648630.9743] device (tapa6acaedc-5f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:30:30 compute-0 NetworkManager[56598]: <info>  [1767648630.9767] device (tapa6acaedc-5f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:30:30 compute-0 podman[250824]: 2026-01-05 21:30:30.982599108 +0000 UTC m=+0.117053633 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, name=ubi9, io.buildah.version=1.29.0, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, io.openshift.expose-services=, maintainer=Red Hat, Inc., release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, release=1214.1726694543, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git)
Jan 05 21:30:30 compute-0 podman[250825]: 2026-01-05 21:30:30.987465146 +0000 UTC m=+0.118196943 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_ipmi)
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.987 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4602c488-faca-42bd-9633-6e59edc0d756]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:30 compute-0 NetworkManager[56598]: <info>  [1767648630.9945] manager: (tap33bcb7a6-30): new Veth device (/org/freedesktop/NetworkManager/Devices/36)
Jan 05 21:30:30 compute-0 systemd-udevd[250875]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:30:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:30.993 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[fb980705-4b18-4040-bbe3-70d9a2b52462]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.027 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[98fb1d43-6e2d-4123-835e-22436079705e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.030 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[5575fdb4-480a-4992-9849-f73153e5303b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 NetworkManager[56598]: <info>  [1767648631.0531] device (tap33bcb7a6-30): carrier: link connected
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.057 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[7911b3fb-8cc7-4f5f-ac48-941f9309cb36]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.075 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[de1a248d-7826-4056-91aa-60e92bc29be4]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33bcb7a6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:af:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537175, 'reachable_time': 41292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 250902, 'error': None, 'target': 'ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.089 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[499dceee-7224-4a31-aa38-314195a565d3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe0f:af4f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537175, 'tstamp': 537175}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 250903, 'error': None, 'target': 'ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.105 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[5def16e4-36e6-4150-b18d-4db92e56f538]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap33bcb7a6-31'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:0f:af:4f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 21], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537175, 'reachable_time': 41292, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 250904, 'error': None, 'target': 'ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.133 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[33a5fb8e-a5e4-41cc-9823-265cdf202903]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.210 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9bee635e-4fd5-4e9f-a39e-06b59a871f4a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.212 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap33bcb7a6-30, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.212 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.213 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap33bcb7a6-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:31 compute-0 kernel: tap33bcb7a6-30: entered promiscuous mode
Jan 05 21:30:31 compute-0 NetworkManager[56598]: <info>  [1767648631.2159] manager: (tap33bcb7a6-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/37)
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.215 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.227 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap33bcb7a6-30, col_values=(('external_ids', {'iface-id': 'c3e05f88-97c2-469c-81f3-d52dff3918b2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:31 compute-0 ovn_controller[98229]: 2026-01-05T21:30:31Z|00076|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.228 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.245 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.246 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/33bcb7a6-33e4-40b9-bab8-4665cf65dcc5.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/33bcb7a6-33e4-40b9-bab8-4665cf65dcc5.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.247 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[729476ca-31c4-4c88-a2e3-2e20077fbc12]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.248 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/33bcb7a6-33e4-40b9-bab8-4665cf65dcc5.pid.haproxy
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID 33bcb7a6-33e4-40b9-bab8-4665cf65dcc5
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:30:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:31.248 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'env', 'PROCESS_TAG=haproxy-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/33bcb7a6-33e4-40b9-bab8-4665cf65dcc5.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:30:31 compute-0 openstack_network_exporter[205720]: ERROR   21:30:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:30:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:30:31 compute-0 openstack_network_exporter[205720]: ERROR   21:30:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:30:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.488 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648631.486963, 62f57876-af2d-4771-bffd-c87b7755cc5c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.489 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] VM Started (Lifecycle Event)
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.513 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.524 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648631.4876919, 62f57876-af2d-4771-bffd-c87b7755cc5c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.525 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] VM Paused (Lifecycle Event)
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.550 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.557 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:31 compute-0 nova_compute[186018]: 2026-01-05 21:30:31.583 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:31 compute-0 podman[250942]: 2026-01-05 21:30:31.709156797 +0000 UTC m=+0.078071486 container create 76e9e59625d7c932e8a9a7efa76f599dcd2658e55abb25b8276eb073b85a3121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:30:31 compute-0 systemd[1]: Started libpod-conmon-76e9e59625d7c932e8a9a7efa76f599dcd2658e55abb25b8276eb073b85a3121.scope.
Jan 05 21:30:31 compute-0 podman[250942]: 2026-01-05 21:30:31.66519389 +0000 UTC m=+0.034108569 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:30:31 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:30:31 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6d2f4830a4113454c999b4ca41496f4d24a9bdc5a0fe3789d58cf3f8bbf8b574/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:30:31 compute-0 podman[250942]: 2026-01-05 21:30:31.820874158 +0000 UTC m=+0.189788867 container init 76e9e59625d7c932e8a9a7efa76f599dcd2658e55abb25b8276eb073b85a3121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 05 21:30:31 compute-0 podman[250942]: 2026-01-05 21:30:31.828404067 +0000 UTC m=+0.197318746 container start 76e9e59625d7c932e8a9a7efa76f599dcd2658e55abb25b8276eb073b85a3121 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 05 21:30:31 compute-0 neutron-haproxy-ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5[250957]: [NOTICE]   (250961) : New worker (250963) forked
Jan 05 21:30:31 compute-0 neutron-haproxy-ovnmeta-33bcb7a6-33e4-40b9-bab8-4665cf65dcc5[250957]: [NOTICE]   (250961) : Loading success.
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.217 186022 DEBUG nova.compute.manager [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received event network-changed-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.218 186022 DEBUG nova.compute.manager [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Refreshing instance network info cache due to event network-changed-9fb87af1-df86-49eb-922f-0cb70d0c6ce1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.219 186022 DEBUG oslo_concurrency.lockutils [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.326 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.369 186022 DEBUG nova.network.neutron [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated VIF entry in instance network info cache for port a6acaedc-5f9d-4aca-9e6b-c69623601aca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.369 186022 DEBUG nova.network.neutron [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.387 186022 DEBUG oslo_concurrency.lockutils [req-0c822f15-651d-4406-9fdb-c4ea1ce27cc9 req-d8535b1e-1387-4e70-a940-b37d2c2375b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:32 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:30:32 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.455 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.694 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Successfully updated port: 7233cede-206c-45d2-9447-e0c1aafe27d2 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.715 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.814 186022 DEBUG nova.network.neutron [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Updating instance_info_cache with network_info: [{"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.920 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.921 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquired lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:32 compute-0 nova_compute[186018]: 2026-01-05 21:30:32.921 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.052 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Releasing lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.053 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Instance network_info: |[{"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.053 186022 DEBUG oslo_concurrency.lockutils [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.053 186022 DEBUG nova.network.neutron [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Refreshing network info cache for port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.056 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Start _get_guest_xml network_info=[{"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.065 186022 WARNING nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.073 186022 DEBUG nova.virt.libvirt.host [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.073 186022 DEBUG nova.virt.libvirt.host [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.079 186022 DEBUG nova.virt.libvirt.host [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.080 186022 DEBUG nova.virt.libvirt.host [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.080 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.080 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.081 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.081 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.081 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.081 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.082 186022 DEBUG nova.virt.hardware [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.086 186022 DEBUG nova.virt.libvirt.vif [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-807206790',display_name='tempest-ServersTestManualDisk-server-807206790',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-807206790',id=7,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDeZJrrYdVqwRj/4jRj/LPny3LQ3PCtmjARFkvUU8fz8wG9dWaDkuKn4OY0av2cqn2g8GV20h8KSW13w9bOpoKWJn0Q7kZWAaYMkvjchcLREDNAOo4RbvVcKtgfZnGYkQ==',key_name='tempest-keypair-558169748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010a085a147e46ac9d1df9d6d76b673a',ramdisk_id='',reservation_id='r-0ahzr0en',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1483019970',owner_user_name='tempest-ServersTestManualDisk-1483019970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69ccd256a35f415ca66bb59592f26ea6',uuid=55d782b9-fb70-40e6-b501-16b69cd9a3e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.086 186022 DEBUG nova.network.os_vif_util [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converting VIF {"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.087 186022 DEBUG nova.network.os_vif_util [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.088 186022 DEBUG nova.objects.instance [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lazy-loading 'pci_devices' on Instance uuid 55d782b9-fb70-40e6-b501-16b69cd9a3e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.113 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <uuid>55d782b9-fb70-40e6-b501-16b69cd9a3e1</uuid>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <name>instance-00000007</name>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:name>tempest-ServersTestManualDisk-server-807206790</nova:name>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:30:33</nova:creationTime>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:user uuid="69ccd256a35f415ca66bb59592f26ea6">tempest-ServersTestManualDisk-1483019970-project-member</nova:user>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:project uuid="010a085a147e46ac9d1df9d6d76b673a">tempest-ServersTestManualDisk-1483019970</nova:project>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         <nova:port uuid="9fb87af1-df86-49eb-922f-0cb70d0c6ce1">
Jan 05 21:30:33 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.9" ipVersion="4"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <system>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="serial">55d782b9-fb70-40e6-b501-16b69cd9a3e1</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="uuid">55d782b9-fb70-40e6-b501-16b69cd9a3e1</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </system>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <os>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </os>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <features>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </features>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.config"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:cc:77:98"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <target dev="tap9fb87af1-df"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/console.log" append="off"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <video>
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </video>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:30:33 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:30:33 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:30:33 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:30:33 compute-0 nova_compute[186018]: </domain>
Jan 05 21:30:33 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.114 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Preparing to wait for external event network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.114 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.114 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.115 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.115 186022 DEBUG nova.virt.libvirt.vif [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-807206790',display_name='tempest-ServersTestManualDisk-server-807206790',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-807206790',id=7,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDeZJrrYdVqwRj/4jRj/LPny3LQ3PCtmjARFkvUU8fz8wG9dWaDkuKn4OY0av2cqn2g8GV20h8KSW13w9bOpoKWJn0Q7kZWAaYMkvjchcLREDNAOo4RbvVcKtgfZnGYkQ==',key_name='tempest-keypair-558169748',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='010a085a147e46ac9d1df9d6d76b673a',ramdisk_id='',reservation_id='r-0ahzr0en',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestManualDisk-1483019970',owner_user_name='tempest-ServersTestManualDisk-1483019970-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:25Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69ccd256a35f415ca66bb59592f26ea6',uuid=55d782b9-fb70-40e6-b501-16b69cd9a3e1,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": 
"10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.115 186022 DEBUG nova.network.os_vif_util [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converting VIF {"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.116 186022 DEBUG nova.network.os_vif_util [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.116 186022 DEBUG os_vif [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.117 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.117 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.117 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.121 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.122 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9fb87af1-df, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.122 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap9fb87af1-df, col_values=(('external_ids', {'iface-id': '9fb87af1-df86-49eb-922f-0cb70d0c6ce1', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:cc:77:98', 'vm-uuid': '55d782b9-fb70-40e6-b501-16b69cd9a3e1'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.124 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:33 compute-0 NetworkManager[56598]: <info>  [1767648633.1263] manager: (tap9fb87af1-df): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/38)
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.128 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.135 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.136 186022 INFO os_vif [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df')
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.376 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.377 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.377 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] No VIF found with MAC fa:16:3e:cc:77:98, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.377 186022 INFO nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Using config drive
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.529 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.825 186022 DEBUG nova.compute.manager [req-7668e82a-f666-4036-a06b-e17c53257c3b req-4dce8bf9-dd85-4a82-b3d0-270e148b8c9e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Received event network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.825 186022 DEBUG oslo_concurrency.lockutils [req-7668e82a-f666-4036-a06b-e17c53257c3b req-4dce8bf9-dd85-4a82-b3d0-270e148b8c9e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.825 186022 DEBUG oslo_concurrency.lockutils [req-7668e82a-f666-4036-a06b-e17c53257c3b req-4dce8bf9-dd85-4a82-b3d0-270e148b8c9e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.826 186022 DEBUG oslo_concurrency.lockutils [req-7668e82a-f666-4036-a06b-e17c53257c3b req-4dce8bf9-dd85-4a82-b3d0-270e148b8c9e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.826 186022 DEBUG nova.compute.manager [req-7668e82a-f666-4036-a06b-e17c53257c3b req-4dce8bf9-dd85-4a82-b3d0-270e148b8c9e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Processing event network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.826 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.832 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648633.8317807, 62f57876-af2d-4771-bffd-c87b7755cc5c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.832 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] VM Resumed (Lifecycle Event)
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.834 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.838 186022 INFO nova.virt.libvirt.driver [-] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Instance spawned successfully.
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.839 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.871 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.872 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.873 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.873 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.874 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.874 186022 DEBUG nova.virt.libvirt.driver [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.884 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.890 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.910 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.946 186022 INFO nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Took 9.14 seconds to spawn the instance on the hypervisor.
Jan 05 21:30:33 compute-0 nova_compute[186018]: 2026-01-05 21:30:33.946 186022 DEBUG nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:34 compute-0 nova_compute[186018]: 2026-01-05 21:30:34.017 186022 INFO nova.compute.manager [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Took 9.59 seconds to build instance.
Jan 05 21:30:34 compute-0 nova_compute[186018]: 2026-01-05 21:30:34.032 186022 DEBUG oslo_concurrency.lockutils [None req-be70c3e3-7b5b-4c8e-9106-5c7d16db8313 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.932s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.115 186022 INFO nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Creating config drive at /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.config
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.122 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqxxvf9eo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.249 186022 DEBUG oslo_concurrency.processutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqxxvf9eo" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:35 compute-0 kernel: tap9fb87af1-df: entered promiscuous mode
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.3448] manager: (tap9fb87af1-df): new Tun device (/org/freedesktop/NetworkManager/Devices/39)
Jan 05 21:30:35 compute-0 ovn_controller[98229]: 2026-01-05T21:30:35Z|00077|binding|INFO|Claiming lport 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 for this chassis.
Jan 05 21:30:35 compute-0 ovn_controller[98229]: 2026-01-05T21:30:35Z|00078|binding|INFO|9fb87af1-df86-49eb-922f-0cb70d0c6ce1: Claiming fa:16:3e:cc:77:98 10.100.0.9
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.356 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 ovn_controller[98229]: 2026-01-05T21:30:35Z|00079|binding|INFO|Setting lport 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 ovn-installed in OVS
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.372 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.377 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 ovn_controller[98229]: 2026-01-05T21:30:35Z|00080|binding|INFO|Setting lport 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 up in Southbound
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.381 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:77:98 10.100.0.9'], port_security=['fa:16:3e:cc:77:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '55d782b9-fb70-40e6-b501-16b69cd9a3e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010a085a147e46ac9d1df9d6d76b673a', 'neutron:revision_number': '2', 'neutron:security_group_ids': '7e25d908-8ce0-4e4e-b658-e4bc93ff6fb9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da5921cc-eaf7-43ac-becb-44ae4249a9aa, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=9fb87af1-df86-49eb-922f-0cb70d0c6ce1) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.382 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 in datapath af412d1c-9dfc-4972-9536-dd32101b5e7b bound to our chassis
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.384 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network af412d1c-9dfc-4972-9536-dd32101b5e7b
Jan 05 21:30:35 compute-0 systemd-udevd[251021]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.397 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3f09ff37-54ec-4a76-91e7-dbd9686766bb]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.398 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaf412d1c-91 in ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.400 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaf412d1c-90 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:30:35 compute-0 systemd-machined[157312]: New machine qemu-7-instance-00000007.
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.401 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a6435123-56d8-42e2-949e-aeeb1cd546bd]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.402 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f7796ea4-5897-4470-98b8-c7f432192a76]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.4130] device (tap9fb87af1-df): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:30:35 compute-0 systemd[1]: Started Virtual Machine qemu-7-instance-00000007.
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.4165] device (tap9fb87af1-df): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.414 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[5d52f64c-4686-4053-89cd-fdc0909b9720]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.441 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[8c949620-2ed9-48f9-bebf-875d7d3c154a]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.468 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[ed4de9ef-7e64-4f70-b25c-34b497c17b34]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.4765] manager: (tapaf412d1c-90): new Veth device (/org/freedesktop/NetworkManager/Devices/40)
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.475 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9959e6fc-738d-4d1d-9f14-3932b4138c61]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 podman[251004]: 2026-01-05 21:30:35.496734477 +0000 UTC m=+0.164447011 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, 
maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.506 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[daf51f5a-2b32-40d1-8947-413193f7abfc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.509 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4984f2d0-93fb-4f2d-81bc-eaea9cb5d993]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.5328] device (tapaf412d1c-90): carrier: link connected
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.538 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[0e57f2e5-b159-402b-9b4e-1e91b9f8e27b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.554 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3ab1797a-60b0-4f36-ae12-45343f4b11ec]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf412d1c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:1c:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537623, 'reachable_time': 44833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251063, 'error': None, 'target': 'ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.569 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a78fc63f-3068-42d5-8fe2-7ff0d4baeb81]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:feda:1c12'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537623, 'tstamp': 537623}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251064, 'error': None, 'target': 'ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.584 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c62e4bb6-38db-46c9-b7b6-620302156a46]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaf412d1c-91'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:da:1c:12'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 23], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537623, 'reachable_time': 44833, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251065, 'error': None, 'target': 'ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.614 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[dee02599-9d8b-44d2-894e-bdb62000baa1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.681 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[7fdd4e54-c8c5-4bfa-829d-93ed39bb71fb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.684 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf412d1c-90, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.685 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.685 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaf412d1c-90, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.687 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 NetworkManager[56598]: <info>  [1767648635.6887] manager: (tapaf412d1c-90): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/41)
Jan 05 21:30:35 compute-0 kernel: tapaf412d1c-90: entered promiscuous mode
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.695 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaf412d1c-90, col_values=(('external_ids', {'iface-id': '955504bf-4228-404f-a9f1-7ce937b5bf40'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.695 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.697 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.698 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:35 compute-0 ovn_controller[98229]: 2026-01-05T21:30:35Z|00081|binding|INFO|Releasing lport 955504bf-4228-404f-a9f1-7ce937b5bf40 from this chassis (sb_readonly=0)
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.699 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/af412d1c-9dfc-4972-9536-dd32101b5e7b.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/af412d1c-9dfc-4972-9536-dd32101b5e7b.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.700 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ac442684-7196-40b8-8910-6bb2423b1b22]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.701 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-af412d1c-9dfc-4972-9536-dd32101b5e7b
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/af412d1c-9dfc-4972-9536-dd32101b5e7b.pid.haproxy
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID af412d1c-9dfc-4972-9536-dd32101b5e7b
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:30:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:35.701 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'env', 'PROCESS_TAG=haproxy-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/af412d1c-9dfc-4972-9536-dd32101b5e7b.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:30:35 compute-0 nova_compute[186018]: 2026-01-05 21:30:35.710 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.014 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648636.0136864, 55d782b9-fb70-40e6-b501-16b69cd9a3e1 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.016 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] VM Started (Lifecycle Event)
Jan 05 21:30:36 compute-0 podman[251101]: 2026-01-05 21:30:36.122771818 +0000 UTC m=+0.043045624 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:30:36 compute-0 podman[251101]: 2026-01-05 21:30:36.224697112 +0000 UTC m=+0.144970888 container create acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2)
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.385 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.394 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648636.0138388, 55d782b9-fb70-40e6-b501-16b69cd9a3e1 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.395 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] VM Paused (Lifecycle Event)
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.444 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.452 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:36 compute-0 systemd[1]: Started libpod-conmon-acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72.scope.
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.474 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:36 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:30:36 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/a8dbaafbaeae859c381fac8997a0f49c3a44ab14eaccdcb0402a1b2cf41b682b/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:30:36 compute-0 podman[251101]: 2026-01-05 21:30:36.528732136 +0000 UTC m=+0.449005982 container init acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 21:30:36 compute-0 podman[251101]: 2026-01-05 21:30:36.539684665 +0000 UTC m=+0.459958441 container start acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:30:36 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [NOTICE]   (251120) : New worker (251122) forked
Jan 05 21:30:36 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [NOTICE]   (251120) : Loading success.
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.566 186022 DEBUG nova.network.neutron [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updating instance_info_cache with network_info: [{"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.591 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Releasing lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.592 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Instance network_info: |[{"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.596 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Start _get_guest_xml network_info=[{"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.604 186022 WARNING nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.614 186022 DEBUG nova.virt.libvirt.host [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.615 186022 DEBUG nova.virt.libvirt.host [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.620 186022 DEBUG nova.virt.libvirt.host [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.621 186022 DEBUG nova.virt.libvirt.host [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.622 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.622 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.623 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.623 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.624 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.624 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.625 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.625 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.626 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.626 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.627 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.627 186022 DEBUG nova.virt.hardware [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.632 186022 DEBUG nova.virt.libvirt.vif [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1830148341',display_name='tempest-ServersTestJSON-server-1830148341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1830148341',id=8,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILq6JIC25+Sou0Rf1t2/KsITA61NIRv/wVHxX64QYj7AildhzF08Zsxs6//dPLfYO2um7ZJdhhA6xnODFC5CLETwsZMkQPybWjkpb+sCA87oTzjVqI08yeHCNtavr3M4Q==',key_name='tempest-keypair-1161762285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ed80fade1274d8785b48dcf02608341',ramdisk_id='',reservation_id='r-1nfa5kac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-183434633',owner_user_name='tempest-ServersTestJSON-183434633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7e00bbed09469a93a4c03517990c2b',uuid=c5df5b36-6b5f-4e8d-b9db-aa96dc06de77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", 
"version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.632 186022 DEBUG nova.network.os_vif_util [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converting VIF {"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.633 186022 DEBUG nova.network.os_vif_util [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.635 186022 DEBUG nova.objects.instance [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lazy-loading 'pci_devices' on Instance uuid c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.654 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <uuid>c5df5b36-6b5f-4e8d-b9db-aa96dc06de77</uuid>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <name>instance-00000008</name>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:name>tempest-ServersTestJSON-server-1830148341</nova:name>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:30:36</nova:creationTime>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:user uuid="8a7e00bbed09469a93a4c03517990c2b">tempest-ServersTestJSON-183434633-project-member</nova:user>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:project uuid="5ed80fade1274d8785b48dcf02608341">tempest-ServersTestJSON-183434633</nova:project>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         <nova:port uuid="7233cede-206c-45d2-9447-e0c1aafe27d2">
Jan 05 21:30:36 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.13" ipVersion="4"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <system>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="serial">c5df5b36-6b5f-4e8d-b9db-aa96dc06de77</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="uuid">c5df5b36-6b5f-4e8d-b9db-aa96dc06de77</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </system>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <os>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </os>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <features>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </features>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.config"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:4e:50:51"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <target dev="tap7233cede-20"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/console.log" append="off"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <video>
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </video>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:30:36 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:30:36 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:30:36 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:30:36 compute-0 nova_compute[186018]: </domain>
Jan 05 21:30:36 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.656 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Preparing to wait for external event network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.656 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.656 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.657 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.657 186022 DEBUG nova.virt.libvirt.vif [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:30:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1830148341',display_name='tempest-ServersTestJSON-server-1830148341',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1830148341',id=8,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILq6JIC25+Sou0Rf1t2/KsITA61NIRv/wVHxX64QYj7AildhzF08Zsxs6//dPLfYO2um7ZJdhhA6xnODFC5CLETwsZMkQPybWjkpb+sCA87oTzjVqI08yeHCNtavr3M4Q==',key_name='tempest-keypair-1161762285',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5ed80fade1274d8785b48dcf02608341',ramdisk_id='',reservation_id='r-1nfa5kac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServersTestJSON-183434633',owner_user_name='tempest-ServersTestJSON-183434633-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:30:28Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7e00bbed09469a93a4c03517990c2b',uuid=c5df5b36-6b5f-4e8d-b9db-aa96dc06de77,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.658 186022 DEBUG nova.network.os_vif_util [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converting VIF {"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.658 186022 DEBUG nova.network.os_vif_util [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.659 186022 DEBUG os_vif [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.659 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.660 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.660 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.666 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.666 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap7233cede-20, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.667 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap7233cede-20, col_values=(('external_ids', {'iface-id': '7233cede-206c-45d2-9447-e0c1aafe27d2', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:4e:50:51', 'vm-uuid': 'c5df5b36-6b5f-4e8d-b9db-aa96dc06de77'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.669 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:36 compute-0 NetworkManager[56598]: <info>  [1767648636.6713] manager: (tap7233cede-20): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/42)
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.676 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.680 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.681 186022 INFO os_vif [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20')
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.753 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.754 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.754 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] No VIF found with MAC fa:16:3e:4e:50:51, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.755 186022 INFO nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Using config drive
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.822 186022 DEBUG nova.network.neutron [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Updated VIF entry in instance network info cache for port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.823 186022 DEBUG nova.network.neutron [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Updating instance_info_cache with network_info: [{"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:36 compute-0 nova_compute[186018]: 2026-01-05 21:30:36.837 186022 DEBUG oslo_concurrency.lockutils [req-0ea733c9-b706-4669-8aed-34348bbbd0e3 req-e1fe783c-25d8-4d1f-b00e-2e07dae95af2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.189 186022 INFO nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Creating config drive at /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.config
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.204 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgmeg2i3_ execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.351 186022 DEBUG oslo_concurrency.processutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpgmeg2i3_" returned: 0 in 0.147s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:30:37 compute-0 systemd-udevd[251045]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:30:37 compute-0 kernel: tap7233cede-20: entered promiscuous mode
Jan 05 21:30:37 compute-0 NetworkManager[56598]: <info>  [1767648637.4513] manager: (tap7233cede-20): new Tun device (/org/freedesktop/NetworkManager/Devices/43)
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00082|binding|INFO|Claiming lport 7233cede-206c-45d2-9447-e0c1aafe27d2 for this chassis.
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00083|binding|INFO|7233cede-206c-45d2-9447-e0c1aafe27d2: Claiming fa:16:3e:4e:50:51 10.100.0.13
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.456 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 NetworkManager[56598]: <info>  [1767648637.4731] device (tap7233cede-20): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00084|binding|INFO|Setting lport 7233cede-206c-45d2-9447-e0c1aafe27d2 ovn-installed in OVS
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.478 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 NetworkManager[56598]: <info>  [1767648637.4830] device (tap7233cede-20): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.485 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 systemd-machined[157312]: New machine qemu-8-instance-00000008.
Jan 05 21:30:37 compute-0 systemd[1]: Started Virtual Machine qemu-8-instance-00000008.
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.655 186022 DEBUG nova.compute.manager [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Received event network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.657 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.657 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.657 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.657 186022 DEBUG nova.compute.manager [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] No waiting events found dispatching network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.658 186022 WARNING nova.compute.manager [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Received unexpected event network-vif-plugged-a6acaedc-5f9d-4aca-9e6b-c69623601aca for instance with vm_state active and task_state None.
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.657 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:50:51 10.100.0.13'], port_security=['fa:16:3e:4e:50:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5df5b36-6b5f-4e8d-b9db-aa96dc06de77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ed80fade1274d8785b48dcf02608341', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'ed646749-3acd-4be1-b077-8e69731ce765', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad059b39-1418-4661-bdc4-fccf9d0fe5f0, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=7233cede-206c-45d2-9447-e0c1aafe27d2) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00085|binding|INFO|Setting lport 7233cede-206c-45d2-9447-e0c1aafe27d2 up in Southbound
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.658 186022 DEBUG nova.compute.manager [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received event network-changed-7233cede-206c-45d2-9447-e0c1aafe27d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.658 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 7233cede-206c-45d2-9447-e0c1aafe27d2 in datapath 76ad42c4-a28f-4528-9090-217c5e2d84c8 bound to our chassis
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.659 186022 DEBUG nova.compute.manager [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Refreshing instance network info cache due to event network-changed-7233cede-206c-45d2-9447-e0c1aafe27d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.660 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.660 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.661 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 76ad42c4-a28f-4528-9090-217c5e2d84c8
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.661 186022 DEBUG nova.network.neutron [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Refreshing network info cache for port 7233cede-206c-45d2-9447-e0c1aafe27d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00086|binding|INFO|Releasing lport 955504bf-4228-404f-a9f1-7ce937b5bf40 from this chassis (sb_readonly=0)
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00087|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.674 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.675 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[546c5b4c-0a16-4ed6-b037-bacfddc48250]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.678 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap76ad42c4-a1 in ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.682 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap76ad42c4-a0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.682 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d30f5efc-51b6-4e10-bcf2-fbb145663b97]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.685 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[88e37dd7-ba58-489e-94e7-662f78509c1e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.714 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[942d6118-ee7e-4457-98c5-0c5bdff8d52b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.750 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[694a2c4e-e0f0-49ad-b904-152517b737c6]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.796 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[1f211c23-246b-4321-b8c4-2a169f597106]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 NetworkManager[56598]: <info>  [1767648637.8141] manager: (tap76ad42c4-a0): new Veth device (/org/freedesktop/NetworkManager/Devices/44)
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.814 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.818 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[54f97899-1248-4aa5-bbd7-e6c51544b00c]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00088|binding|INFO|Releasing lport 955504bf-4228-404f-a9f1-7ce937b5bf40 from this chassis (sb_readonly=0)
Jan 05 21:30:37 compute-0 ovn_controller[98229]: 2026-01-05T21:30:37Z|00089|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.844 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.878 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[908c0b61-99fc-45a6-80d6-fc4f224eb46f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.899 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4b04c129-b46a-4871-b6b7-7ceb5ba97689]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 NetworkManager[56598]: <info>  [1767648637.9244] device (tap76ad42c4-a0): carrier: link connected
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.929 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[b8f3f0b3-0ee7-445c-b4cf-60d6af0c86c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.939 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648637.9385183, c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.940 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] VM Started (Lifecycle Event)
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.946 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[857007b7-d44e-4ca6-aa7c-de759be2833e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76ad42c4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:25:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537862, 'reachable_time': 32987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251175, 'error': None, 'target': 'ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.963 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a551d7ea-8584-4cfc-91ed-a6e3141dac95]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fee1:2589'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 537862, 'tstamp': 537862}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 251176, 'error': None, 'target': 'ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.976 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:37.979 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[6eeb3b8e-0d8d-4572-8a16-82533647a48a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap76ad42c4-a1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:e1:25:89'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 25], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537862, 'reachable_time': 32987, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 251177, 'error': None, 'target': 'ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.986 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648637.9387703, c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:37 compute-0 nova_compute[186018]: 2026-01-05 21:30:37.986 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] VM Paused (Lifecycle Event)
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.007 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.014 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.022 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[dc91c470-3c40-4711-8590-fca918c62d90]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.036 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.085 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ee6d76bf-7887-4ae3-8255-5b8171a73cfb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.087 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76ad42c4-a0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.087 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.087 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76ad42c4-a0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.089 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:38 compute-0 kernel: tap76ad42c4-a0: entered promiscuous mode
Jan 05 21:30:38 compute-0 NetworkManager[56598]: <info>  [1767648638.0911] manager: (tap76ad42c4-a0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/45)
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.092 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap76ad42c4-a0, col_values=(('external_ids', {'iface-id': '4997aca5-5f85-4324-b4a2-5a91f8966a2d'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:38 compute-0 ovn_controller[98229]: 2026-01-05T21:30:38Z|00090|binding|INFO|Releasing lport 4997aca5-5f85-4324-b4a2-5a91f8966a2d from this chassis (sb_readonly=0)
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.097 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.098 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/76ad42c4-a28f-4528-9090-217c5e2d84c8.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/76ad42c4-a28f-4528-9090-217c5e2d84c8.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.100 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4ebb4bda-597a-43da-a20b-c414796ddf3a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.102 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-76ad42c4-a28f-4528-9090-217c5e2d84c8
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/76ad42c4-a28f-4528-9090-217c5e2d84c8.pid.haproxy
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID 76ad42c4-a28f-4528-9090-217c5e2d84c8
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:30:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:38.104 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'env', 'PROCESS_TAG=haproxy-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/76ad42c4-a28f-4528-9090-217c5e2d84c8.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.114 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:38 compute-0 podman[251209]: 2026-01-05 21:30:38.529938505 +0000 UTC m=+0.053332016 container create 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:30:38 compute-0 systemd[1]: Started libpod-conmon-987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3.scope.
Jan 05 21:30:38 compute-0 podman[251209]: 2026-01-05 21:30:38.499715019 +0000 UTC m=+0.023108550 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:30:38 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:30:38 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/656ce494ca7c77d5623142c1e14d74331a21bb2a43d7098720a9501b0f19c019/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:30:38 compute-0 podman[251209]: 2026-01-05 21:30:38.632009662 +0000 UTC m=+0.155403253 container init 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:30:38 compute-0 podman[251209]: 2026-01-05 21:30:38.640982118 +0000 UTC m=+0.164375659 container start 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:30:38 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [NOTICE]   (251227) : New worker (251229) forked
Jan 05 21:30:38 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [NOTICE]   (251227) : Loading success.
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.758 186022 DEBUG nova.compute.manager [req-a707330d-9207-4523-928b-58e3abb2dec4 req-692849e3-9b27-43d7-873a-6111afbcf1c6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received event network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.758 186022 DEBUG oslo_concurrency.lockutils [req-a707330d-9207-4523-928b-58e3abb2dec4 req-692849e3-9b27-43d7-873a-6111afbcf1c6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.759 186022 DEBUG oslo_concurrency.lockutils [req-a707330d-9207-4523-928b-58e3abb2dec4 req-692849e3-9b27-43d7-873a-6111afbcf1c6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.759 186022 DEBUG oslo_concurrency.lockutils [req-a707330d-9207-4523-928b-58e3abb2dec4 req-692849e3-9b27-43d7-873a-6111afbcf1c6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.760 186022 DEBUG nova.compute.manager [req-a707330d-9207-4523-928b-58e3abb2dec4 req-692849e3-9b27-43d7-873a-6111afbcf1c6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Processing event network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.760 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Instance event wait completed in 2 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.766 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648638.7660983, 55d782b9-fb70-40e6-b501-16b69cd9a3e1 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.766 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] VM Resumed (Lifecycle Event)
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.769 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.775 186022 INFO nova.virt.libvirt.driver [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Instance spawned successfully.
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.776 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.796 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.809 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.814 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.814 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.815 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.815 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.816 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.816 186022 DEBUG nova.virt.libvirt.driver [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.848 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.906 186022 INFO nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Took 13.28 seconds to spawn the instance on the hypervisor.
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.907 186022 DEBUG nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.920 186022 DEBUG nova.network.neutron [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updated VIF entry in instance network info cache for port 7233cede-206c-45d2-9447-e0c1aafe27d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.921 186022 DEBUG nova.network.neutron [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updating instance_info_cache with network_info: [{"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.938 186022 DEBUG oslo_concurrency.lockutils [req-41cc56bc-e79d-4969-b441-1dfa43a61ff9 req-13d3b0dd-95ec-42a4-a4aa-adac7a1da188 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:38 compute-0 nova_compute[186018]: 2026-01-05 21:30:38.980 186022 INFO nova.compute.manager [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Took 13.73 seconds to build instance.
Jan 05 21:30:39 compute-0 nova_compute[186018]: 2026-01-05 21:30:39.005 186022 DEBUG oslo_concurrency.lockutils [None req-1c6b669b-9bd0-495f-968c-1c1337d8ead9 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 13.823s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.462 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:41 compute-0 NetworkManager[56598]: <info>  [1767648641.4638] manager: (patch-provnet-f8df9651-98ab-4571-aafb-53926ee41805-to-br-int): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/46)
Jan 05 21:30:41 compute-0 NetworkManager[56598]: <info>  [1767648641.4677] manager: (patch-br-int-to-provnet-f8df9651-98ab-4571-aafb-53926ee41805): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/47)
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.567 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:41 compute-0 ovn_controller[98229]: 2026-01-05T21:30:41Z|00091|binding|INFO|Releasing lport 955504bf-4228-404f-a9f1-7ce937b5bf40 from this chassis (sb_readonly=0)
Jan 05 21:30:41 compute-0 ovn_controller[98229]: 2026-01-05T21:30:41Z|00092|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:41 compute-0 ovn_controller[98229]: 2026-01-05T21:30:41Z|00093|binding|INFO|Releasing lport 4997aca5-5f85-4324-b4a2-5a91f8966a2d from this chassis (sb_readonly=0)
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.583 186022 DEBUG nova.compute.manager [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received event network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.584 186022 DEBUG oslo_concurrency.lockutils [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.584 186022 DEBUG oslo_concurrency.lockutils [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.585 186022 DEBUG oslo_concurrency.lockutils [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.585 186022 DEBUG nova.compute.manager [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] No waiting events found dispatching network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.586 186022 WARNING nova.compute.manager [req-60ba9ba5-4c21-4263-8ccf-f912a3530182 req-baf30d45-51d3-402e-a252-de5484ba3a55 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received unexpected event network-vif-plugged-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 for instance with vm_state active and task_state None.
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.601 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:41 compute-0 nova_compute[186018]: 2026-01-05 21:30:41.669 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:42 compute-0 podman[251240]: 2026-01-05 21:30:42.779126576 +0000 UTC m=+0.123634465 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 21:30:42 compute-0 podman[251241]: 2026-01-05 21:30:42.786977233 +0000 UTC m=+0.131526103 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.expose-services=, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, version=9.6, release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:30:42 compute-0 nova_compute[186018]: 2026-01-05 21:30:42.817 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:42.869 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:42.869 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:42.870 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:43 compute-0 nova_compute[186018]: 2026-01-05 21:30:43.748 186022 DEBUG nova.compute.manager [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Received event network-changed-a6acaedc-5f9d-4aca-9e6b-c69623601aca external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:43 compute-0 nova_compute[186018]: 2026-01-05 21:30:43.749 186022 DEBUG nova.compute.manager [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Refreshing instance network info cache due to event network-changed-a6acaedc-5f9d-4aca-9e6b-c69623601aca. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:43 compute-0 nova_compute[186018]: 2026-01-05 21:30:43.750 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:43 compute-0 nova_compute[186018]: 2026-01-05 21:30:43.751 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:43 compute-0 nova_compute[186018]: 2026-01-05 21:30:43.752 186022 DEBUG nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Refreshing network info cache for port a6acaedc-5f9d-4aca-9e6b-c69623601aca _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.542 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.543 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.544 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.544 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.545 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.546 186022 INFO nova.compute.manager [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Terminating instance
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.547 186022 DEBUG nova.compute.manager [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:30:44 compute-0 kernel: tap9fb87af1-df (unregistering): left promiscuous mode
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.597 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:44 compute-0 ovn_controller[98229]: 2026-01-05T21:30:44Z|00094|binding|INFO|Releasing lport 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 from this chassis (sb_readonly=0)
Jan 05 21:30:44 compute-0 ovn_controller[98229]: 2026-01-05T21:30:44Z|00095|binding|INFO|Setting lport 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 down in Southbound
Jan 05 21:30:44 compute-0 ovn_controller[98229]: 2026-01-05T21:30:44Z|00096|binding|INFO|Removing iface tap9fb87af1-df ovn-installed in OVS
Jan 05 21:30:44 compute-0 NetworkManager[56598]: <info>  [1767648644.6038] device (tap9fb87af1-df): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.614 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:44.614 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:cc:77:98 10.100.0.9'], port_security=['fa:16:3e:cc:77:98 10.100.0.9'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.9/28', 'neutron:device_id': '55d782b9-fb70-40e6-b501-16b69cd9a3e1', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '010a085a147e46ac9d1df9d6d76b673a', 'neutron:revision_number': '4', 'neutron:security_group_ids': '7e25d908-8ce0-4e4e-b658-e4bc93ff6fb9', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.218'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=da5921cc-eaf7-43ac-becb-44ae4249a9aa, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=9fb87af1-df86-49eb-922f-0cb70d0c6ce1) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:44.615 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 in datapath af412d1c-9dfc-4972-9536-dd32101b5e7b unbound from our chassis
Jan 05 21:30:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:44.617 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network af412d1c-9dfc-4972-9536-dd32101b5e7b, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:30:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:44.619 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[728d9784-29c9-4c87-8d3f-38bf6ca9750f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:44.619 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b namespace which is not needed anymore
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.631 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:44 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Deactivated successfully.
Jan 05 21:30:44 compute-0 systemd[1]: machine-qemu\x2d7\x2dinstance\x2d00000007.scope: Consumed 6.582s CPU time.
Jan 05 21:30:44 compute-0 systemd-machined[157312]: Machine qemu-7-instance-00000007 terminated.
Jan 05 21:30:44 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [NOTICE]   (251120) : haproxy version is 2.8.14-c23fe91
Jan 05 21:30:44 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [NOTICE]   (251120) : path to executable is /usr/sbin/haproxy
Jan 05 21:30:44 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [WARNING]  (251120) : Exiting Master process...
Jan 05 21:30:44 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [ALERT]    (251120) : Current worker (251122) exited with code 143 (Terminated)
Jan 05 21:30:44 compute-0 neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b[251116]: [WARNING]  (251120) : All workers exited. Exiting... (0)
Jan 05 21:30:44 compute-0 systemd[1]: libpod-acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72.scope: Deactivated successfully.
Jan 05 21:30:44 compute-0 podman[251305]: 2026-01-05 21:30:44.824075126 +0000 UTC m=+0.085893162 container died acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.824 186022 INFO nova.virt.libvirt.driver [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Instance destroyed successfully.
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.825 186022 DEBUG nova.objects.instance [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lazy-loading 'resources' on Instance uuid 55d782b9-fb70-40e6-b501-16b69cd9a3e1 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.844 186022 DEBUG nova.virt.libvirt.vif [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:30:24Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestManualDisk-server-807206790',display_name='tempest-ServersTestManualDisk-server-807206790',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestmanualdisk-server-807206790',id=7,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBGDeZJrrYdVqwRj/4jRj/LPny3LQ3PCtmjARFkvUU8fz8wG9dWaDkuKn4OY0av2cqn2g8GV20h8KSW13w9bOpoKWJn0Q7kZWAaYMkvjchcLREDNAOo4RbvVcKtgfZnGYkQ==',key_name='tempest-keypair-558169748',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:30:38Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='010a085a147e46ac9d1df9d6d76b673a',ramdisk_id='',reservation_id='r-0ahzr0en',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestManualDisk-1483019970',owner_user_name='tempest-ServersTestManualDisk-1483019970-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:30:38Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='69ccd256a35f415ca66bb59592f26ea6',uuid=55d782b9-fb70-40e6-b501-16b69cd9a3e1,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": 
[], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.844 186022 DEBUG nova.network.os_vif_util [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converting VIF {"id": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "address": "fa:16:3e:cc:77:98", "network": {"id": "af412d1c-9dfc-4972-9536-dd32101b5e7b", "bridge": "br-int", "label": "tempest-ServersTestManualDisk-260656285-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.9", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "010a085a147e46ac9d1df9d6d76b673a", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap9fb87af1-df", "ovs_interfaceid": "9fb87af1-df86-49eb-922f-0cb70d0c6ce1", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.846 186022 DEBUG nova.network.os_vif_util [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.847 186022 DEBUG os_vif [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.848 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.849 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9fb87af1-df, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.856 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.859 186022 INFO os_vif [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:cc:77:98,bridge_name='br-int',has_traffic_filtering=True,id=9fb87af1-df86-49eb-922f-0cb70d0c6ce1,network=Network(af412d1c-9dfc-4972-9536-dd32101b5e7b),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap9fb87af1-df')
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.861 186022 INFO nova.virt.libvirt.driver [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Deleting instance files /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1_del
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.862 186022 INFO nova.virt.libvirt.driver [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Deletion of /var/lib/nova/instances/55d782b9-fb70-40e6-b501-16b69cd9a3e1_del complete
Jan 05 21:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72-userdata-shm.mount: Deactivated successfully.
Jan 05 21:30:44 compute-0 systemd[1]: var-lib-containers-storage-overlay-a8dbaafbaeae859c381fac8997a0f49c3a44ab14eaccdcb0402a1b2cf41b682b-merged.mount: Deactivated successfully.
Jan 05 21:30:44 compute-0 podman[251305]: 2026-01-05 21:30:44.893393171 +0000 UTC m=+0.155211217 container cleanup acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:30:44 compute-0 systemd[1]: libpod-conmon-acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72.scope: Deactivated successfully.
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.948 186022 INFO nova.compute.manager [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.948 186022 DEBUG oslo.service.loopingcall [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.949 186022 DEBUG nova.compute.manager [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:30:44 compute-0 nova_compute[186018]: 2026-01-05 21:30:44.949 186022 DEBUG nova.network.neutron [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:30:45 compute-0 podman[251347]: 2026-01-05 21:30:45.001643691 +0000 UTC m=+0.074009899 container remove acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.010 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[98fa4e06-dab5-4cb9-b422-86f4eae73882]: (4, ('Mon Jan  5 09:30:44 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b (acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72)\nacd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72\nMon Jan  5 09:30:44 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b (acd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72)\nacd4e7e0ab8f14dca24712cdf21519dd5de920e010e268320a537e66c1837f72\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.012 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[945b6244-a43a-4ad1-926c-32f687000852]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.013 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaf412d1c-90, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:45 compute-0 kernel: tapaf412d1c-90: left promiscuous mode
Jan 05 21:30:45 compute-0 nova_compute[186018]: 2026-01-05 21:30:45.026 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:45 compute-0 nova_compute[186018]: 2026-01-05 21:30:45.030 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.033 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ec7dcfdb-6b02-4ba2-9954-87cb72b8630d]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.052 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9b61ce39-de27-4f26-9d6e-0c8032b7317d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.053 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ee0e8b95-b348-4fda-975a-e9f325924438]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.070 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0ebb084c-4365-4368-b2b2-f282a57a52a3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537616, 'reachable_time': 23889, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251361, 'error': None, 'target': 'ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:45 compute-0 systemd[1]: run-netns-ovnmeta\x2daf412d1c\x2d9dfc\x2d4972\x2d9536\x2ddd32101b5e7b.mount: Deactivated successfully.
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.087 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-af412d1c-9dfc-4972-9536-dd32101b5e7b deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:30:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:45.087 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[3e65e41c-283c-4dc0-aea6-3e97d0751635]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.598 186022 DEBUG nova.compute.manager [req-41567328-2d24-42ff-b6f0-c7f9fd7fe8af req-58bb3da7-0696-4d4a-835b-b0a370d5eca1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received event network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.601 186022 DEBUG oslo_concurrency.lockutils [req-41567328-2d24-42ff-b6f0-c7f9fd7fe8af req-58bb3da7-0696-4d4a-835b-b0a370d5eca1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.602 186022 DEBUG oslo_concurrency.lockutils [req-41567328-2d24-42ff-b6f0-c7f9fd7fe8af req-58bb3da7-0696-4d4a-835b-b0a370d5eca1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.603 186022 DEBUG oslo_concurrency.lockutils [req-41567328-2d24-42ff-b6f0-c7f9fd7fe8af req-58bb3da7-0696-4d4a-835b-b0a370d5eca1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.603 186022 DEBUG nova.compute.manager [req-41567328-2d24-42ff-b6f0-c7f9fd7fe8af req-58bb3da7-0696-4d4a-835b-b0a370d5eca1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Processing event network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.605 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Instance event wait completed in 8 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.615 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648646.6148179, c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.615 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] VM Resumed (Lifecycle Event)
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.617 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.625 186022 INFO nova.virt.libvirt.driver [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Instance spawned successfully.
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.626 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.639 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.650 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.656 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.657 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.658 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.659 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.660 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.661 186022 DEBUG nova.virt.libvirt.driver [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.697 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.750 186022 INFO nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Took 18.07 seconds to spawn the instance on the hypervisor.
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.751 186022 DEBUG nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.862 186022 INFO nova.compute.manager [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Took 18.70 seconds to build instance.
Jan 05 21:30:46 compute-0 nova_compute[186018]: 2026-01-05 21:30:46.942 186022 DEBUG oslo_concurrency.lockutils [None req-41436bed-6565-4db8-8a6a-b4c13469e191 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 18.866s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.575 186022 DEBUG nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated VIF entry in instance network info cache for port a6acaedc-5f9d-4aca-9e6b-c69623601aca. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.577 186022 DEBUG nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.609 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.610 186022 DEBUG nova.compute.manager [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received event network-changed-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.611 186022 DEBUG nova.compute.manager [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Refreshing instance network info cache due to event network-changed-9fb87af1-df86-49eb-922f-0cb70d0c6ce1. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.611 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.612 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.612 186022 DEBUG nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Refreshing network info cache for port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:47 compute-0 podman[251363]: 2026-01-05 21:30:47.733789093 +0000 UTC m=+0.081178258 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.745 186022 DEBUG nova.network.neutron [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:47 compute-0 podman[251362]: 2026-01-05 21:30:47.746289902 +0000 UTC m=+0.094194161 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.780 186022 INFO nova.compute.manager [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Took 2.83 seconds to deallocate network for instance.
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.820 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.851 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.852 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.976 186022 DEBUG nova.compute.provider_tree [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:30:47 compute-0 nova_compute[186018]: 2026-01-05 21:30:47.990 186022 DEBUG nova.scheduler.client.report [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.015 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:48 compute-0 ovn_controller[98229]: 2026-01-05T21:30:48Z|00097|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:48 compute-0 ovn_controller[98229]: 2026-01-05T21:30:48Z|00098|binding|INFO|Releasing lport 4997aca5-5f85-4324-b4a2-5a91f8966a2d from this chassis (sb_readonly=0)
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.042 186022 INFO nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Port 9fb87af1-df86-49eb-922f-0cb70d0c6ce1 from network info_cache is no longer associated with instance in Neutron. Removing from network info_cache.
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.042 186022 DEBUG nova.network.neutron [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.060 186022 INFO nova.scheduler.client.report [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Deleted allocations for instance 55d782b9-fb70-40e6-b501-16b69cd9a3e1
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.083 186022 DEBUG oslo_concurrency.lockutils [req-4147e30a-b96a-4e57-a9da-1b2646b196a8 req-da72baae-6734-4481-b332-c0c6459aaf19 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-55d782b9-fb70-40e6-b501-16b69cd9a3e1" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.107 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:48 compute-0 nova_compute[186018]: 2026-01-05 21:30:48.184 186022 DEBUG oslo_concurrency.lockutils [None req-54da8d60-c095-4935-8f7f-c2ac54843aad 69ccd256a35f415ca66bb59592f26ea6 010a085a147e46ac9d1df9d6d76b673a - - default default] Lock "55d782b9-fb70-40e6-b501-16b69cd9a3e1" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.640s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.183 186022 DEBUG nova.compute.manager [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received event network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.185 186022 DEBUG oslo_concurrency.lockutils [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.186 186022 DEBUG oslo_concurrency.lockutils [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.187 186022 DEBUG oslo_concurrency.lockutils [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.188 186022 DEBUG nova.compute.manager [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] No waiting events found dispatching network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.189 186022 WARNING nova.compute.manager [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received unexpected event network-vif-plugged-7233cede-206c-45d2-9447-e0c1aafe27d2 for instance with vm_state active and task_state None.
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.190 186022 DEBUG nova.compute.manager [req-a74f9ca3-5058-4aec-8bc4-5c6461876bcc req-23bc4b12-ca19-42ec-b86b-6c7f10a41b2a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Received event network-vif-deleted-9fb87af1-df86-49eb-922f-0cb70d0c6ce1 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:49 compute-0 nova_compute[186018]: 2026-01-05 21:30:49.853 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:52 compute-0 nova_compute[186018]: 2026-01-05 21:30:52.824 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.596 186022 DEBUG nova.compute.manager [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received event network-changed-7233cede-206c-45d2-9447-e0c1aafe27d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.598 186022 DEBUG nova.compute.manager [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Refreshing instance network info cache due to event network-changed-7233cede-206c-45d2-9447-e0c1aafe27d2. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.598 186022 DEBUG oslo_concurrency.lockutils [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.599 186022 DEBUG oslo_concurrency.lockutils [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.599 186022 DEBUG nova.network.neutron [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Refreshing network info cache for port 7233cede-206c-45d2-9447-e0c1aafe27d2 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:30:54 compute-0 podman[251404]: 2026-01-05 21:30:54.827909097 +0000 UTC m=+0.072418598 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:30:54 compute-0 nova_compute[186018]: 2026-01-05 21:30:54.857 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 ovn_controller[98229]: 2026-01-05T21:30:57Z|00099|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:30:57 compute-0 ovn_controller[98229]: 2026-01-05T21:30:57Z|00100|binding|INFO|Releasing lport 4997aca5-5f85-4324-b4a2-5a91f8966a2d from this chassis (sb_readonly=0)
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.543 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.629 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.630 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.630 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.631 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.631 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.633 186022 INFO nova.compute.manager [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Terminating instance
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.634 186022 DEBUG nova.compute.manager [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:30:57 compute-0 kernel: tap7233cede-20 (unregistering): left promiscuous mode
Jan 05 21:30:57 compute-0 NetworkManager[56598]: <info>  [1767648657.6622] device (tap7233cede-20): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:30:57 compute-0 ovn_controller[98229]: 2026-01-05T21:30:57Z|00101|binding|INFO|Releasing lport 7233cede-206c-45d2-9447-e0c1aafe27d2 from this chassis (sb_readonly=0)
Jan 05 21:30:57 compute-0 ovn_controller[98229]: 2026-01-05T21:30:57Z|00102|binding|INFO|Setting lport 7233cede-206c-45d2-9447-e0c1aafe27d2 down in Southbound
Jan 05 21:30:57 compute-0 ovn_controller[98229]: 2026-01-05T21:30:57Z|00103|binding|INFO|Removing iface tap7233cede-20 ovn-installed in OVS
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.672 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:57.677 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:4e:50:51 10.100.0.13'], port_security=['fa:16:3e:4e:50:51 10.100.0.13'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.13/28', 'neutron:device_id': 'c5df5b36-6b5f-4e8d-b9db-aa96dc06de77', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5ed80fade1274d8785b48dcf02608341', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'ed646749-3acd-4be1-b077-8e69731ce765', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.249'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=ad059b39-1418-4661-bdc4-fccf9d0fe5f0, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=7233cede-206c-45d2-9447-e0c1aafe27d2) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:30:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:57.678 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 7233cede-206c-45d2-9447-e0c1aafe27d2 in datapath 76ad42c4-a28f-4528-9090-217c5e2d84c8 unbound from our chassis
Jan 05 21:30:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:57.679 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 76ad42c4-a28f-4528-9090-217c5e2d84c8, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.685 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:57.682 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[07a92b7f-1f5d-4dcb-a203-d385653591d5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:57.683 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8 namespace which is not needed anymore
Jan 05 21:30:57 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Deactivated successfully.
Jan 05 21:30:57 compute-0 systemd[1]: machine-qemu\x2d8\x2dinstance\x2d00000008.scope: Consumed 11.758s CPU time.
Jan 05 21:30:57 compute-0 systemd-machined[157312]: Machine qemu-8-instance-00000008 terminated.
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.825 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [NOTICE]   (251227) : haproxy version is 2.8.14-c23fe91
Jan 05 21:30:57 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [NOTICE]   (251227) : path to executable is /usr/sbin/haproxy
Jan 05 21:30:57 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [WARNING]  (251227) : Exiting Master process...
Jan 05 21:30:57 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [ALERT]    (251227) : Current worker (251229) exited with code 143 (Terminated)
Jan 05 21:30:57 compute-0 neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8[251223]: [WARNING]  (251227) : All workers exited. Exiting... (0)
Jan 05 21:30:57 compute-0 systemd[1]: libpod-987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3.scope: Deactivated successfully.
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.858 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 podman[251451]: 2026-01-05 21:30:57.860068517 +0000 UTC m=+0.057243258 container died 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.865 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.906 186022 INFO nova.virt.libvirt.driver [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Instance destroyed successfully.
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.908 186022 DEBUG nova.objects.instance [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lazy-loading 'resources' on Instance uuid c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3-userdata-shm.mount: Deactivated successfully.
Jan 05 21:30:57 compute-0 systemd[1]: var-lib-containers-storage-overlay-656ce494ca7c77d5623142c1e14d74331a21bb2a43d7098720a9501b0f19c019-merged.mount: Deactivated successfully.
Jan 05 21:30:57 compute-0 podman[251451]: 2026-01-05 21:30:57.920067387 +0000 UTC m=+0.117242128 container cleanup 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 21:30:57 compute-0 systemd[1]: libpod-conmon-987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3.scope: Deactivated successfully.
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.938 186022 DEBUG nova.virt.libvirt.vif [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] vif_type=ovs instance=Instance(access_ip_v4=1.1.1.1,access_ip_v6=::babe:dc0c:1602,architecture=None,auto_disk_config=True,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:30:27Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServersTestJSON-server-1830148341',display_name='tempest-ServersTestJSON-server-1830148341',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serverstestjson-server-1830148341',id=8,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBILq6JIC25+Sou0Rf1t2/KsITA61NIRv/wVHxX64QYj7AildhzF08Zsxs6//dPLfYO2um7ZJdhhA6xnODFC5CLETwsZMkQPybWjkpb+sCA87oTzjVqI08yeHCNtavr3M4Q==',key_name='tempest-keypair-1161762285',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:30:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={hello='world'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5ed80fade1274d8785b48dcf02608341',ramdisk_id='',reservation_id='r-1nfa5kac',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServersTestJSON-183434633',owner_user_name='tempest-ServersTestJSON-183434633-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:30:46Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='8a7e00bbed09469a93a4c03517990c2b',uuid=c5df5b36-6b5f-4e8d-b9db-aa96dc06de77,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.940 186022 DEBUG nova.network.os_vif_util [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converting VIF {"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.940 186022 DEBUG nova.network.os_vif_util [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.941 186022 DEBUG os_vif [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.944 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.945 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap7233cede-20, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.947 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.949 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.953 186022 INFO os_vif [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:4e:50:51,bridge_name='br-int',has_traffic_filtering=True,id=7233cede-206c-45d2-9447-e0c1aafe27d2,network=Network(76ad42c4-a28f-4528-9090-217c5e2d84c8),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap7233cede-20')
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.954 186022 INFO nova.virt.libvirt.driver [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Deleting instance files /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77_del
Jan 05 21:30:57 compute-0 nova_compute[186018]: 2026-01-05 21:30:57.955 186022 INFO nova.virt.libvirt.driver [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Deletion of /var/lib/nova/instances/c5df5b36-6b5f-4e8d-b9db-aa96dc06de77_del complete
Jan 05 21:30:58 compute-0 podman[251495]: 2026-01-05 21:30:58.018215131 +0000 UTC m=+0.054706571 container remove 987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0)
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.040 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[34c8db61-bb18-4f4a-bd47-907cbb0c8ac5]: (4, ('Mon Jan  5 09:30:57 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8 (987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3)\n987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3\nMon Jan  5 09:30:57 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8 (987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3)\n987a0b54321d6dd0494d67e3944ac215ca5e356448d8690d64154ca63895b0a3\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.042 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[478cd5e6-82a9-4a20-82e9-f06eaec1365f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.043 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76ad42c4-a0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.045 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:58 compute-0 kernel: tap76ad42c4-a0: left promiscuous mode
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.051 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.053 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[608cc338-b83b-40b1-9265-f6d90bf1c03f]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.066 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.075 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e8424c8a-dcbb-4d79-80f0-caf91579b7e6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.077 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3898c37c-79a4-4769-8818-fab45793e48c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.086 186022 INFO nova.compute.manager [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Took 0.45 seconds to destroy the instance on the hypervisor.
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.087 186022 DEBUG oslo.service.loopingcall [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.087 186022 DEBUG nova.compute.manager [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:30:58 compute-0 nova_compute[186018]: 2026-01-05 21:30:58.087 186022 DEBUG nova.network.neutron [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.096 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[fc4f85a9-7113-484e-8b42-cb87e933498f]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 537849, 'reachable_time': 16343, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 251510, 'error': None, 'target': 'ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:58 compute-0 systemd[1]: run-netns-ovnmeta\x2d76ad42c4\x2da28f\x2d4528\x2d9090\x2d217c5e2d84c8.mount: Deactivated successfully.
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.100 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-76ad42c4-a28f-4528-9090-217c5e2d84c8 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:30:58 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:30:58.100 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[6b434f91-6217-4a45-8417-4468a6f5bfbc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.143 186022 DEBUG nova.network.neutron [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updated VIF entry in instance network info cache for port 7233cede-206c-45d2-9447-e0c1aafe27d2. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.144 186022 DEBUG nova.network.neutron [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updating instance_info_cache with network_info: [{"id": "7233cede-206c-45d2-9447-e0c1aafe27d2", "address": "fa:16:3e:4e:50:51", "network": {"id": "76ad42c4-a28f-4528-9090-217c5e2d84c8", "bridge": "br-int", "label": "tempest-ServersTestJSON-1037725494-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.13", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.249", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5ed80fade1274d8785b48dcf02608341", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap7233cede-20", "ovs_interfaceid": "7233cede-206c-45d2-9447-e0c1aafe27d2", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.496 186022 DEBUG oslo_concurrency.lockutils [req-389bb025-66b2-498c-b19b-c3a4b0ca249b req-28f71f6a-c736-49ae-8ed1-8d11659c73ba 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:30:59 compute-0 podman[202426]: time="2026-01-05T21:30:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:30:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:30:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:30:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:30:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4371 "" "Go-http-client/1.1"
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.821 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648644.8200014, 55d782b9-fb70-40e6-b501-16b69cd9a3e1 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.822 186022 INFO nova.compute.manager [-] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] VM Stopped (Lifecycle Event)
Jan 05 21:30:59 compute-0 nova_compute[186018]: 2026-01-05 21:30:59.856 186022 DEBUG nova.compute.manager [None req-083b406b-c856-4f9f-a2f6-a2b94316442f - - - - - -] [instance: 55d782b9-fb70-40e6-b501-16b69cd9a3e1] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:31:01 compute-0 openstack_network_exporter[205720]: ERROR   21:31:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:31:01 compute-0 openstack_network_exporter[205720]: ERROR   21:31:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:31:01 compute-0 nova_compute[186018]: 2026-01-05 21:31:01.659 186022 DEBUG nova.network.neutron [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:31:01 compute-0 nova_compute[186018]: 2026-01-05 21:31:01.682 186022 INFO nova.compute.manager [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Took 3.59 seconds to deallocate network for instance.
Jan 05 21:31:01 compute-0 podman[251511]: 2026-01-05 21:31:01.730683873 +0000 UTC m=+0.077377178 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, distribution-scope=public, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, container_name=kepler, build-date=2024-09-18T21:23:30, vendor=Red Hat, Inc., release-0.7.12=, io.buildah.version=1.29.0, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Jan 05 21:31:01 compute-0 podman[251512]: 2026-01-05 21:31:01.732931712 +0000 UTC m=+0.072940531 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true)
Jan 05 21:31:01 compute-0 nova_compute[186018]: 2026-01-05 21:31:01.759 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:31:01 compute-0 nova_compute[186018]: 2026-01-05 21:31:01.760 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.162 186022 DEBUG nova.compute.provider_tree [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.191 186022 DEBUG nova.compute.manager [req-81c90279-bf5f-411d-8086-e0abe646c8ce req-ec308172-53f5-4c94-a05d-874908c5f5fd 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Received event network-vif-deleted-7233cede-206c-45d2-9447-e0c1aafe27d2 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.230 186022 DEBUG nova.scheduler.client.report [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.320 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.560s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.354 186022 INFO nova.scheduler.client.report [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Deleted allocations for instance c5df5b36-6b5f-4e8d-b9db-aa96dc06de77
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.507 186022 DEBUG oslo_concurrency.lockutils [None req-6b211e16-4273-40d0-b02e-42bbcd315ee9 8a7e00bbed09469a93a4c03517990c2b 5ed80fade1274d8785b48dcf02608341 - - default default] Lock "c5df5b36-6b5f-4e8d-b9db-aa96dc06de77" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 4.878s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.828 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:02 compute-0 nova_compute[186018]: 2026-01-05 21:31:02.947 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:05 compute-0 podman[251549]: 2026-01-05 21:31:05.730279284 +0000 UTC m=+0.077116821 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.787 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.788 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.788 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.789 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.795 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 62f57876-af2d-4771-bffd-c87b7755cc5c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7bc0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:31:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:07.797 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/62f57876-af2d-4771-bffd-c87b7755cc5c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:31:07 compute-0 nova_compute[186018]: 2026-01-05 21:31:07.830 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:07 compute-0 nova_compute[186018]: 2026-01-05 21:31:07.950 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:09 compute-0 ovn_controller[98229]: 2026-01-05T21:31:09Z|00012|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:d3:0d:bf 10.100.0.6
Jan 05 21:31:09 compute-0 ovn_controller[98229]: 2026-01-05T21:31:09Z|00013|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:d3:0d:bf 10.100.0.6
Jan 05 21:31:12 compute-0 nova_compute[186018]: 2026-01-05 21:31:12.832 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:12 compute-0 nova_compute[186018]: 2026-01-05 21:31:12.900 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648657.8993833, c5df5b36-6b5f-4e8d-b9db-aa96dc06de77 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:31:12 compute-0 nova_compute[186018]: 2026-01-05 21:31:12.901 186022 INFO nova.compute.manager [-] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] VM Stopped (Lifecycle Event)
Jan 05 21:31:12 compute-0 nova_compute[186018]: 2026-01-05 21:31:12.952 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:13 compute-0 podman[251584]: 2026-01-05 21:31:13.746358739 +0000 UTC m=+0.088220384 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, config_id=openstack_network_exporter, io.openshift.expose-services=, io.openshift.tags=minimal rhel9, architecture=x86_64, build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, distribution-scope=public, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., managed_by=edpm_ansible, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 21:31:13 compute-0 podman[251583]: 2026-01-05 21:31:13.799305603 +0000 UTC m=+0.145175943 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, config_id=ovn_controller, org.label-schema.vendor=CentOS, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:31:14 compute-0 nova_compute[186018]: 2026-01-05 21:31:14.503 186022 DEBUG nova.compute.manager [None req-b31eaeaf-6c08-4359-8236-34e437d04c6e - - - - - -] [instance: c5df5b36-6b5f-4e8d-b9db-aa96dc06de77] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:31:17 compute-0 nova_compute[186018]: 2026-01-05 21:31:17.836 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:17 compute-0 nova_compute[186018]: 2026-01-05 21:31:17.954 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.139 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1995 Content-Type: application/json Date: Mon, 05 Jan 2026 21:31:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-09cb2b0e-0daa-4b8f-a3c9-736baedfd75c x-openstack-request-id: req-09cb2b0e-0daa-4b8f-a3c9-736baedfd75c _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.140 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "62f57876-af2d-4771-bffd-c87b7755cc5c", "name": "tempest-AttachInterfacesUnderV243Test-server-306597775", "status": "ACTIVE", "tenant_id": "e0899289c7dd4631b4fa69150a914123", "user_id": "168ad639a6ed41c8bd954c434807ef6c", "metadata": {}, "hostId": "c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b", "image": {"id": "ebb2027f-05a6-465a-af75-b7da40a91332", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ebb2027f-05a6-465a-af75-b7da40a91332"}]}, "flavor": {"id": "ce1138a2-4b82-4664-8860-711a956c0882", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ce1138a2-4b82-4664-8860-711a956c0882"}]}, "created": "2026-01-05T21:30:22Z", "updated": "2026-01-05T21:30:33Z", "addresses": {"tempest-AttachInterfacesUnderV243Test-1372767109-network": [{"version": 4, "addr": "10.100.0.6", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d3:0d:bf"}, {"version": 4, "addr": "192.168.122.236", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:d3:0d:bf"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/62f57876-af2d-4771-bffd-c87b7755cc5c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/62f57876-af2d-4771-bffd-c87b7755cc5c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-1556320060", "OS-SRV-USG:launched_at": "2026-01-05T21:30:33.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--203854283"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000006", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", 
"OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.140 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/62f57876-af2d-4771-bffd-c87b7755cc5c used request id req-09cb2b0e-0daa-4b8f-a3c9-736baedfd75c request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.141 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.142 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.142 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.142 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.142 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.143 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.143 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.143 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.143 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.143 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.144 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.144 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:31:18.142637) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.145 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:31:18.144344) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.150 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 62f57876-af2d-4771-bffd-c87b7755cc5c / tapa6acaedc-5f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.151 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.152 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.153 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.153 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.153 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.153 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.154 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:31:18.152650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:31:18.154000) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.155 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:31:18.155503) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.157 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3320 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.158 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.158 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:31:18.156852) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.159 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:31:18.158427) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-306597775>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-306597775>]
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:31:18.160047) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.160 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.161 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.161 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.161 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.161 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:31:18.161384) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.162 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.163 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.163 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:31:18.162839) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.163 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.164 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.164 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.164 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.164 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.164 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.165 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:31:18.164419) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.184 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.184 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.185 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.185 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.185 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.185 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.185 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.186 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.186 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.186 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.187 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.187 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.187 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.188 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.188 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:31:18.186096) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.188 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.188 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.188 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-306597775>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-AttachInterfacesUnderV243Test-server-306597775>]
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.189 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.190 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.190 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.190 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.191 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.191 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.191 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.191 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:31:18.188328) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:31:18.189789) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.192 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:31:18.191276) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.221 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.48828125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.222 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.223 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.223 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.223 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.224 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.224 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.224 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.225 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:31:18.224426) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.226 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.226 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.226 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.227 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.227 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.227 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.228 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.228 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:31:18.227769) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.229 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.229 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.230 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.230 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.230 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.231 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.231 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.231 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:31:18.231412) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.283 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.284 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.284 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.284 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.284 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:31:18.285281) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.286 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:31:18.286492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.287 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:31:18.287714) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.288 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:31:18.288834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.290 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.290 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.290 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 72921088 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.291 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.293 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:31:18.290052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.294 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.294 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.294 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 33760000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.295 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13496578517 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.296 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.297 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:31:18.295164) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:31:18.296197) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.298 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:31:18.297379) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.298 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.299 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.300 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:31:18.301 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:31:18 compute-0 podman[251630]: 2026-01-05 21:31:18.736481795 +0000 UTC m=+0.068036992 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:31:18 compute-0 podman[251629]: 2026-01-05 21:31:18.736363732 +0000 UTC m=+0.068244877 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 21:31:19 compute-0 nova_compute[186018]: 2026-01-05 21:31:19.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:19 compute-0 nova_compute[186018]: 2026-01-05 21:31:19.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:31:19 compute-0 nova_compute[186018]: 2026-01-05 21:31:19.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.523 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.524 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.524 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.525 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.800 186022 DEBUG nova.objects.instance [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Lazy-loading 'flavor' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:31:20 compute-0 nova_compute[186018]: 2026-01-05 21:31:20.902 186022 DEBUG oslo_concurrency.lockutils [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:31:22 compute-0 nova_compute[186018]: 2026-01-05 21:31:22.837 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:22 compute-0 nova_compute[186018]: 2026-01-05 21:31:22.956 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:25 compute-0 podman[251668]: 2026-01-05 21:31:25.712692914 +0000 UTC m=+0.064096479 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:31:27 compute-0 nova_compute[186018]: 2026-01-05 21:31:27.840 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:27 compute-0 nova_compute[186018]: 2026-01-05 21:31:27.959 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:28 compute-0 nova_compute[186018]: 2026-01-05 21:31:28.939 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.400 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.401 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.402 186022 DEBUG oslo_concurrency.lockutils [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.405 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.405 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.406 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.406 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.406 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.407 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.407 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.551 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.552 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 nova_compute[186018]: 2026-01-05 21:31:29.552 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:29 compute-0 podman[202426]: time="2026-01-05T21:31:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:31:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:31:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:31:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:31:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4361 "" "Go-http-client/1.1"
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.199 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.199 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.200 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.200 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:31:30 compute-0 ovn_controller[98229]: 2026-01-05T21:31:30Z|00104|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:31:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:30.350 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=12, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=11) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:31:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:30.351 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 1 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.412 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.426 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.470 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.472 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.538 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.903 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.906 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5171MB free_disk=72.35022735595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.906 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:31:30 compute-0 nova_compute[186018]: 2026-01-05 21:31:30.907 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:31:31 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:31.352 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '12'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:31:31 compute-0 openstack_network_exporter[205720]: ERROR   21:31:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:31:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:31:31 compute-0 openstack_network_exporter[205720]: ERROR   21:31:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:31:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:31:31 compute-0 nova_compute[186018]: 2026-01-05 21:31:31.742 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:32 compute-0 podman[251700]: 2026-01-05 21:31:32.717507004 +0000 UTC m=+0.071670808 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 05 21:31:32 compute-0 podman[251699]: 2026-01-05 21:31:32.734730627 +0000 UTC m=+0.081039944 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., maintainer=Red Hat, Inc., version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-type=git, com.redhat.component=ubi9-container, config_id=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, architecture=x86_64, build-date=2024-09-18T21:23:30, container_name=kepler, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:31:32 compute-0 nova_compute[186018]: 2026-01-05 21:31:32.844 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:32 compute-0 nova_compute[186018]: 2026-01-05 21:31:32.961 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:33 compute-0 nova_compute[186018]: 2026-01-05 21:31:33.941 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:31:33 compute-0 nova_compute[186018]: 2026-01-05 21:31:33.942 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:31:33 compute-0 nova_compute[186018]: 2026-01-05 21:31:33.942 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.244 186022 DEBUG nova.network.neutron [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.368 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.441 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.461 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 3.553s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:31:34 compute-0 nova_compute[186018]: 2026-01-05 21:31:34.475 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:31:36 compute-0 nova_compute[186018]: 2026-01-05 21:31:36.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:36 compute-0 nova_compute[186018]: 2026-01-05 21:31:36.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:31:36 compute-0 podman[251739]: 2026-01-05 21:31:36.778653912 +0000 UTC m=+0.118196393 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:31:37 compute-0 nova_compute[186018]: 2026-01-05 21:31:37.757 186022 DEBUG neutronclient.v2_0.client [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Error message: {"message": "The server is currently unavailable. Please try again at a later time.<br /><br />\nThe Keystone service is temporarily unavailable.\n\n", "code": "503 Service Unavailable", "title": "Service Unavailable"} _handle_fault_response /usr/lib/python3.9/site-packages/neutronclient/v2_0/client.py:262
Jan 05 21:31:37 compute-0 nova_compute[186018]: 2026-01-05 21:31:37.760 186022 DEBUG nova.network.neutron [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Unable to update port a6acaedc-5f9d-4aca-9e6b-c69623601aca on subnet eb1acedc-6882-4628-98da-5681075c51ca with failure: The server is currently unavailable. Please try again at a later time.<br /><br />
Jan 05 21:31:37 compute-0 nova_compute[186018]: The Keystone service is temporarily unavailable.
Jan 05 21:31:37 compute-0 nova_compute[186018]: 
Jan 05 21:31:37 compute-0 nova_compute[186018]: 
Jan 05 21:31:37 compute-0 nova_compute[186018]: Neutron server returns request_ids: ['req-bd87dd8c-91b7-4dd5-83bc-56258aab1571'] add_fixed_ip_to_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:2107
Jan 05 21:31:37 compute-0 nova_compute[186018]: 2026-01-05 21:31:37.762 186022 DEBUG oslo_concurrency.lockutils [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:31:37 compute-0 nova_compute[186018]: 2026-01-05 21:31:37.846 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:37 compute-0 nova_compute[186018]: 2026-01-05 21:31:37.964 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Error during ComputeManager._cleanup_expired_console_auth_tokens: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:38 compute-0 nova_compute[186018]: [SQL: SELECT 1]
Jan 05 21:31:38 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:38 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "oslo_db.exception.DBConnectionError: 
(pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n 
   self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in 
__init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in 
_wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", 
line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task Traceback (most recent call last):
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_service/periodic_task.py", line 216, in run_periodic_tasks
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     task(self, context)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 11282, in _cleanup_expired_console_auth_tokens
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     objects.ConsoleAuthToken.clean_expired_console_auths(context)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 175, in wrapper
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     result = cls.indirection_api.object_class_action_versions(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 240, in object_class_action_versions
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     return cctxt.call(context, 'object_class_action_versions',
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     result = self.transport._send(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     return self._driver.send(target, ctxt, message,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task     raise result
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task [SQL: SELECT 1]
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', "pymysql.err.OperationalError: (2013, 'Lost connection to MySQL server during query')\n", '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 74, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, 
multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1943, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1900, in _execute_context\n    self.dialect.do_execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 736, in do_execute\n    cursor.execute(statement, parameters)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 163, in execute\n    result = self._query(query)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/cursors.py", line 321, in _query\n    conn.query(q)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 505, in query\n    self._affected_rows = self._read_query_result(unbuffered=unbuffered)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 724, in _read_query_result\n    result.read()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 1069, in read\n    first_packet = self.connection._read_packet()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 646, in _read_packet\n    packet_header = self._read_bytes(4)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 698, in _read_bytes\n    raise err.OperationalError(\n', 
"oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2013, 'Lost connection to MySQL server during query')\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n", '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return 
fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 
327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 184, in wrapper\n    result = fn(cls, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/console_auth_token.py", line 182, in clean_expired_console_auths\n    db.console_auth_token_destroy_expired(context)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 4886, in console_auth_token_destroy_expired\n    context.session.query(models.ConsoleAuthToken).\\\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 3222, in delete\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 120, in __init__\n    self.dispatch.engine_connect(self, _branch_from is not None)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/event/attr.py", line 334, in __call__\n    fn(*args, **kw)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/engines.py", line 84, in _connect_ping_listener\n    connection.scalar(select(1))\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1262, in scalar\n    return self.execute(object_, *multiparams, **params).scalar()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1380, in execute\n    return meth(self, multiparams, params, _EMPTY_EXECUTION_OPTS)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/sql/elements.py", line 334, in _execute_on_connection\n    return connection._execute_clauseelement(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1572, in _execute_clauseelement\n    ret = self._execute_context(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1806, in _execute_context\n    self._handle_dbapi_exception(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2122, in _handle_dbapi_exception\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 1798, in _execute_context\n    conn = self._revalidate_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 646, in _revalidate_connection\n    self._dbapi_connection = self.engine.raw_connection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3368, in _wrap_pool_connect\n    util.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, 
**cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n[SQL: SELECT 1]\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task 
Jan 05 21:31:38 compute-0 rsyslogd[237695]: message too long (14559) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-pack [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:38 compute-0 rsyslogd[237695]: message too long (14623) with configured size 8096, begin of message is: 2026-01-05 21:31:38.090 186022 ERROR oslo_service.periodic_task ['Traceback (mos [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.187 186022 ERROR root [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Original exception being dropped: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function\n    return function(self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6552, in add_fixed_ip_to_instance\n    network_info = self.network_api.add_fixed_ip_to_instance(context,\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 165, in wrapper\n    res = f(self, context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2111, in add_fixed_ip_to_instance\n    raise exception.NetworkNotFoundForInstance(\n', 'nova.exception.NetworkNotFoundForInstance: Network could not be found for instance 62f57876-af2d-4771-bffd-c87b7755cc5c.\n']: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server [None req-4cd44d6d-57af-4f8e-aff9-bf54372ad6d7 168ad639a6ed41c8bd954c434807ef6c e0899289c7dd4631b4fa69150a914123 - - default default] Exception during message handling: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:38 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:38 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_fault.py", line 76, in create\n    db_fault = db.instance_fault_create(self._context, values)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    
return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3823, in instance_fault_create\n    fault_ref.save(context.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/models.py", line 38, in save\n    session.flush()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3444, in flush\n    self._flush(objects)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3584, in _flush\n    transaction.rollback(_capture_exception=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3544, in _flush\n    flush_context.execute()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute\n    rec.execute(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute\n    util.preloaded.orm_persistence.save_obj(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj\n    for (\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 373, in _organize_states_for_save\n    for state, dict_, mapper, connection in _connections_for_states(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1737, in _connections_for_states\n    connection = uowtransaction.transaction.connection(base_mapper)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 626, in connection\n    return self._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 735, in _connection_for_bind\n    conn = 
self._parent._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 203, in decorated_function
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return function(self, context, *args, **kwargs)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 6552, in add_fixed_ip_to_instance
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     network_info = self.network_api.add_fixed_ip_to_instance(context,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 165, in wrapper
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     res = f(self, context, *args, **kwargs)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/network/neutron.py", line 2111, in add_fixed_ip_to_instance
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     raise exception.NetworkNotFoundForInstance(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server nova.exception.NetworkNotFoundForInstance: Network could not be found for instance 62f57876-af2d-4771-bffd-c87b7755cc5c.
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server 
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server During handling of the above exception, another exception occurred:
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server 
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server Traceback (most recent call last):
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/server.py", line 165, in _process_incoming
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     res = self.dispatcher.dispatch(message)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 309, in dispatch
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return self._do_dispatch(endpoint, method, ctxt, args)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/dispatcher.py", line 229, in _do_dispatch
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     result = func(ctxt, **new_args)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 71, in wrapped
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     _emit_versioned_exception_notification(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     self.force_reraise()
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     raise self.value
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/exception_wrapper.py", line 63, in wrapped
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return f(self, context, *args, **kw)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/manager.py", line 214, in decorated_function
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     compute_utils.add_instance_fault_from_exc(context,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/compute/utils.py", line 153, in add_instance_fault_from_exc
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     fault_obj.create()
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     updates, result = self.indirection_api.object_action(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     result = self.transport._send(
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return self._driver.send(target, ctxt, message,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server     raise result
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/instance_fault.py", line 76, in create\n    db_fault = db.instance_fault_create(self._context, values)\n', '  File 
"/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 3823, in instance_fault_create\n    fault_ref.save(context.session)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/sqlalchemy/models.py", line 38, in save\n    session.flush()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3444, in flush\n    self._flush(objects)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3584, in _flush\n    transaction.rollback(_capture_exception=True)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 3544, in _flush\n    flush_context.execute()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 456, in execute\n    rec.execute(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/unitofwork.py", line 630, in execute\n    util.preloaded.orm_persistence.save_obj(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 212, in save_obj\n    for (\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 373, in _organize_states_for_save\n    for state, dict_, mapper, connection in _connections_for_states(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/persistence.py", line 1737, in _connections_for_states\n    connection = uowtransaction.transaction.connection(base_mapper)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 626, in connection\n    return self._connection_for_bind(bind, execution_options)\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 735, in _connection_for_bind\n    conn = self._parent._connection_for_bind(bind, execution_options)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 653, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n
Jan 05 21:31:38 compute-0 nova_compute[186018]:     compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:38 compute-0 nova_compute[186018]: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server 
Jan 05 21:31:38 compute-0 rsyslogd[237695]: message too long (9544) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:38 compute-0 rsyslogd[237695]: message too long (8417) with configured size 8096, begin of message is: 2026-01-05 21:31:38.190 186022 ERROR oslo_messaging.rpc.server ['Traceback (most [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:41 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:41 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db     raise result
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:41 compute-0 nova_compute[186018]: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db 
Jan 05 21:31:41 compute-0 rsyslogd[237695]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:41 compute-0 rsyslogd[237695]: message too long (9052) with configured size 8096, begin of message is: 2026-01-05 21:31:41.211 186022 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:42 compute-0 nova_compute[186018]: 2026-01-05 21:31:42.849 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:42.870 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:31:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:42.870 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:31:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:31:42.871 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:31:42 compute-0 nova_compute[186018]: 2026-01-05 21:31:42.967 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:44 compute-0 podman[251757]: 2026-01-05 21:31:44.756149102 +0000 UTC m=+0.085305797 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, com.redhat.component=ubi9-minimal-container, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, url=https://catalog.redhat.com/en/search?searchType=containers, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:31:44 compute-0 podman[251756]: 2026-01-05 21:31:44.769184305 +0000 UTC m=+0.118194003 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_controller, maintainer=OpenStack 
Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:31:47 compute-0 nova_compute[186018]: 2026-01-05 21:31:47.851 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:47 compute-0 nova_compute[186018]: 2026-01-05 21:31:47.969 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:49 compute-0 podman[251803]: 2026-01-05 21:31:49.735363233 +0000 UTC m=+0.078858237 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:31:49 compute-0 podman[251802]: 2026-01-05 21:31:49.761171423 +0000 UTC m=+0.103446205 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:51 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:51 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db     raise result
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:31:51 compute-0 nova_compute[186018]: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db 
Jan 05 21:31:51 compute-0 rsyslogd[237695]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:51 compute-0 rsyslogd[237695]: message too long (9052) with configured size 8096, begin of message is: 2026-01-05 21:31:51.217 186022 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:31:52 compute-0 nova_compute[186018]: 2026-01-05 21:31:52.855 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:52 compute-0 nova_compute[186018]: 2026-01-05 21:31:52.972 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:56 compute-0 podman[251843]: 2026-01-05 21:31:56.758096233 +0000 UTC m=+0.111519037 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:31:57 compute-0 nova_compute[186018]: 2026-01-05 21:31:57.859 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:57 compute-0 nova_compute[186018]: 2026-01-05 21:31:57.975 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:31:59 compute-0 podman[202426]: time="2026-01-05T21:31:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:31:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:31:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:31:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:31:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4364 "" "Go-http-client/1.1"
Jan 05 21:32:01 compute-0 openstack_network_exporter[205720]: ERROR   21:32:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:32:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:32:01 compute-0 openstack_network_exporter[205720]: ERROR   21:32:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:32:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:32:01 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:32:01 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db     raise result
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:32:01 compute-0 nova_compute[186018]: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db 
Jan 05 21:32:01 compute-0 rsyslogd[237695]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:32:01 compute-0 rsyslogd[237695]: message too long (9052) with configured size 8096, begin of message is: 2026-01-05 21:32:01.664 186022 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:32:02 compute-0 nova_compute[186018]: 2026-01-05 21:32:02.861 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:02 compute-0 nova_compute[186018]: 2026-01-05 21:32:02.979 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:03 compute-0 ovn_controller[98229]: 2026-01-05T21:32:03Z|00105|memory_trim|INFO|Detected inactivity (last active 30004 ms ago): trimming memory
Jan 05 21:32:03 compute-0 podman[251867]: 2026-01-05 21:32:03.75600858 +0000 UTC m=+0.101204065 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 21:32:03 compute-0 podman[251866]: 2026-01-05 21:32:03.765519781 +0000 UTC m=+0.104836442 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.expose-services=, vendor=Red Hat, Inc., release=1214.1726694543, distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, build-date=2024-09-18T21:23:30, name=ubi9, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, 
com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., config_id=kepler, release-0.7.12=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler)
Jan 05 21:32:07 compute-0 podman[251905]: 2026-01-05 21:32:07.709681001 +0000 UTC m=+0.060185746 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.license=GPLv2, tcib_managed=true, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:32:07 compute-0 nova_compute[186018]: 2026-01-05 21:32:07.864 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:07 compute-0 nova_compute[186018]: 2026-01-05 21:32:07.981 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db [-] Unexpected error while reporting service status: oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:32:11 compute-0 nova_compute[186018]: (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:32:11 compute-0 nova_compute[186018]: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File 
"/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = 
e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in 
raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise 
exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db Traceback (most recent call last):
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/servicegroup/drivers/db.py", line 92, in _report_state
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     service.service_ref.save()
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 209, in wrapper
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     updates, result = self.indirection_api.object_action(
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/nova/conductor/rpcapi.py", line 247, in object_action
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     return cctxt.call(context, 'object_action', objinst=objinst,
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/rpc/client.py", line 190, in call
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     result = self.transport._send(
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/transport.py", line 123, in _send
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     return self._driver.send(target, ctxt, message,
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 689, in send
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     return self._send(target, ctxt, message, wait_for_reply, timeout,
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db   File "/usr/lib/python3.9/site-packages/oslo_messaging/_drivers/amqpdriver.py", line 681, in _send
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db     raise result
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db oslo_messaging.rpc.client.RemoteError: Remote error: DBConnectionError (pymysql.err.OperationalError) (2003, "Can't connect to MySQL server on 'openstack-cell1.openstack.svc' ([Errno 111] ECONNREFUSED)")
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db (Background on this error at: https://sqlalche.me/e/14/e3q8)
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 569, in connect\n    sock = socket.create_connection(\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 63, in create_connection\n    raise err\n', '  File "/usr/lib/python3.9/site-packages/eventlet/green/socket.py", line 53, in create_connection\n    sock.connect(sa)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 270, in connect\n    socket_checkerr(fd)\n', '  File "/usr/lib/python3.9/site-packages/eventlet/greenio/base.py", line 54, in socket_checkerr\n    raise socket.error(err, errno.errorcode[err])\n', 'ConnectionRefusedError: [Errno 111] ECONNREFUSED\n', '\nDuring handling of the above exception, another exception occurred:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in 
get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'pymysql.err.OperationalError: (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n', '\nThe above exception was the direct cause of the following exception:\n\n', 'Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packages/nova/conductor/manager.py", line 142, in _object_dispatch\n    return getattr(target, method)(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/oslo_versionedobjects/base.py", line 226, in wrapper\n    return fn(self, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/objects/service.py", line 505, in save\n    db_service = db.service_update(self._context, self.id, updates)\n', '  File 
"/usr/lib/python3.9/site-packages/oslo_db/api.py", line 154, in wrapper\n    ectxt.value = e.inner_exc\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 227, in __exit__\n    self.force_reraise()\n', '  File "/usr/lib/python3.9/site-packages/oslo_utils/excutils.py", line 200, in force_reraise\n    raise self.value\n', '  File "/usr/lib/python3.9/site-packages/oslo_db/api.py", line 142, in wrapper\n    return f(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 207, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 563, in service_update\n    service_ref = service_get(context, service_id)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 224, in wrapper\n    return f(context, *args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/nova/db/main/api.py", line 398, in service_get\n    result = query.first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2824, in first\n    return self.limit(1)._iter().first()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/query.py", line 2916, in _iter\n    result = self.session.execute(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1713, in execute\n    conn = self._connection_for_bind(bind)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 1552, in _connection_for_bind\n    return self._transaction._connection_for_bind(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/orm/session.py", line 747, in _connection_for_bind\n    conn = bind.connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3315, in connect\n    return self._connection_cls(self, close_with_result=close_with_result)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 96, in __init__\n    else 
engine.raw_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3394, in raw_connection\n    return self._wrap_pool_connect(self.pool.connect, _connection)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3364, in _wrap_pool_connect\n    Connection._handle_dbapi_exception_noconnection(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 2196, in _handle_dbapi_exception_noconnection\n    util.raise_(newraise, with_traceback=exc_info[2], from_=e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/base.py", line 3361, in _wrap_pool_connect\n    return fn()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 325, in connect\n    return _ConnectionFairy._checkout(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 888, in _checkout\n    fairy = _ConnectionRecord.checkout(pool)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 496, in checkout\n    rec._checkin_failed(err, _fairy_was_created=False)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 493, in checkout\n    dbapi_connection = rec.get_connection()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 624, in get_connection\n    self.__connect()\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 685, in __connect\n    pool.logger.debug("Error on connect(): %s", e)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/langhelpers.py", line 70, in __exit__\n    
compat.raise_(\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/util/compat.py", line 211, in raise_\n    raise exception\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/pool/base.py", line 680, in __connect\n    self.dbapi_connection = connection = pool._invoke_creator(self)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/create.py", line 578, in connect\n    return dialect.connect(*cargs, **cparams)\n', '  File "/usr/lib64/python3.9/site-packages/sqlalchemy/engine/default.py", line 598, in connect\n    return self.dbapi.connect(*cargs, **cparams)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/__init__.py", line 94, in Connect\n    return Connection(*args, **kwargs)\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 327, in __init__\n    self.connect()\n', '  File "/usr/lib/python3.9/site-packages/pymysql/connections.py", line 619, in connect\n    raise exc\n', 'oslo_db.exception.DBConnectionError: (pymysql.err.OperationalError) (2003, "Can\'t connect to MySQL server on \'openstack-cell1.openstack.svc\' ([Errno 111] ECONNREFUSED)")\n(Background on this error at: https://sqlalche.me/e/14/e3q8)\n'].
Jan 05 21:32:11 compute-0 nova_compute[186018]: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db 
Jan 05 21:32:11 compute-0 rsyslogd[237695]: message too long (8986) with configured size 8096, begin of message is: ['Traceback (most recent call last):\n', '  File "/usr/lib/python3.9/site-packag [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:32:11 compute-0 rsyslogd[237695]: message too long (9052) with configured size 8096, begin of message is: 2026-01-05 21:32:11.209 186022 ERROR nova.servicegroup.drivers.db ['Traceback (m [v8.2510.0-2.el9 try https://www.rsyslog.com/e/2445 ]
Jan 05 21:32:12 compute-0 nova_compute[186018]: 2026-01-05 21:32:12.867 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:12 compute-0 nova_compute[186018]: 2026-01-05 21:32:12.983 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:15 compute-0 podman[251927]: 2026-01-05 21:32:15.757404509 +0000 UTC m=+0.106893395 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=openstack_network_exporter, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.tags=minimal rhel9, version=9.6, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.openshift.expose-services=, vendor=Red 
Hat, Inc., distribution-scope=public, name=ubi9-minimal, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 21:32:15 compute-0 podman[251926]: 2026-01-05 21:32:15.761509487 +0000 UTC m=+0.113293144 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:32:17 compute-0 nova_compute[186018]: 2026-01-05 21:32:17.870 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:17 compute-0 nova_compute[186018]: 2026-01-05 21:32:17.985 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:20 compute-0 podman[251972]: 2026-01-05 21:32:20.716326815 +0000 UTC m=+0.065773873 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 05 21:32:20 compute-0 podman[251973]: 2026-01-05 21:32:20.717157057 +0000 UTC m=+0.061365857 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:32:21 compute-0 nova_compute[186018]: 2026-01-05 21:32:21.281 186022 INFO nova.servicegroup.drivers.db [-] Recovered from being unable to report status.
Jan 05 21:32:22 compute-0 nova_compute[186018]: 2026-01-05 21:32:22.873 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:22 compute-0 nova_compute[186018]: 2026-01-05 21:32:22.988 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.097 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.097 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.098 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:32:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:23.419 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=13, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=12) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.420 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:23.420 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.540 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.541 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.541 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:32:23 compute-0 nova_compute[186018]: 2026-01-05 21:32:23.542 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:32:25 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:25.422 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '13'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:25 compute-0 nova_compute[186018]: 2026-01-05 21:32:25.666 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:32:25 compute-0 nova_compute[186018]: 2026-01-05 21:32:25.697 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:32:25 compute-0 nova_compute[186018]: 2026-01-05 21:32:25.697 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:32:25 compute-0 nova_compute[186018]: 2026-01-05 21:32:25.698 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:25 compute-0 nova_compute[186018]: 2026-01-05 21:32:25.699 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:32:27 compute-0 nova_compute[186018]: 2026-01-05 21:32:27.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:27 compute-0 nova_compute[186018]: 2026-01-05 21:32:27.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:27 compute-0 podman[252013]: 2026-01-05 21:32:27.712688342 +0000 UTC m=+0.060167425 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:32:27 compute-0 nova_compute[186018]: 2026-01-05 21:32:27.874 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:27 compute-0 nova_compute[186018]: 2026-01-05 21:32:27.990 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:28 compute-0 nova_compute[186018]: 2026-01-05 21:32:28.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:28 compute-0 nova_compute[186018]: 2026-01-05 21:32:28.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.485 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.485 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.486 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.486 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.558 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.571 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.631 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.633 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:29 compute-0 nova_compute[186018]: 2026-01-05 21:32:29.696 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:29 compute-0 podman[202426]: time="2026-01-05T21:32:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:32:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:32:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:32:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:32:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4367 "" "Go-http-client/1.1"
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.064 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.066 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5169MB free_disk=72.35022735595703GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.066 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.067 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.140 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.140 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.141 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.157 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.300 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.301 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.319 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.342 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.395 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.416 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.417 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:32:30 compute-0 nova_compute[186018]: 2026-01-05 21:32:30.418 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.351s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:31 compute-0 openstack_network_exporter[205720]: ERROR   21:32:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:32:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:32:31 compute-0 openstack_network_exporter[205720]: ERROR   21:32:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:32:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:32:31 compute-0 nova_compute[186018]: 2026-01-05 21:32:31.422 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:32 compute-0 nova_compute[186018]: 2026-01-05 21:32:32.455 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:32 compute-0 nova_compute[186018]: 2026-01-05 21:32:32.877 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:32 compute-0 nova_compute[186018]: 2026-01-05 21:32:32.992 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:34 compute-0 nova_compute[186018]: 2026-01-05 21:32:34.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:32:34 compute-0 podman[252042]: 2026-01-05 21:32:34.715376916 +0000 UTC m=+0.066583434 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, distribution-scope=public, io.openshift.expose-services=, version=9.4, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., release=1214.1726694543)
Jan 05 21:32:34 compute-0 podman[252043]: 2026-01-05 21:32:34.744577145 +0000 UTC m=+0.091429498 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.755 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.756 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.782 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.857 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.858 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.865 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.865 186022 INFO nova.compute.claims [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:32:36 compute-0 nova_compute[186018]: 2026-01-05 21:32:36.990 186022 DEBUG nova.compute.provider_tree [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.005 186022 DEBUG nova.scheduler.client.report [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.029 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.030 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.085 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.086 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.103 186022 INFO nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.119 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.239 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.241 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.242 186022 INFO nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Creating image(s)
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.243 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.243 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.244 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.259 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.320 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.321 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.322 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.334 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.390 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.391 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.407 186022 DEBUG nova.policy [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '7c73fe2d06da4c34ab29da3c61a0989e', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.431 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk 1073741824" returned: 0 in 0.040s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.432 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.109s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.432 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.490 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.492 186022 DEBUG nova.virt.disk.api [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Checking if we can resize image /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.493 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.552 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.554 186022 DEBUG nova.virt.disk.api [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Cannot resize image /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.555 186022 DEBUG nova.objects.instance [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.576 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.577 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Ensure instance console log exists: /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.577 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.578 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.578 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.879 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:37 compute-0 nova_compute[186018]: 2026-01-05 21:32:37.994 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:38 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:32:38 compute-0 podman[252097]: 2026-01-05 21:32:38.689648089 +0000 UTC m=+0.113979342 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS)
Jan 05 21:32:38 compute-0 nova_compute[186018]: 2026-01-05 21:32:38.979 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Successfully created port: cecba75e-30de-46e3-9539-c1911e784f2d _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:32:40 compute-0 nova_compute[186018]: 2026-01-05 21:32:40.991 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Successfully updated port: cecba75e-30de-46e3-9539-c1911e784f2d _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.008 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.009 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquired lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.009 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.190 186022 DEBUG nova.compute.manager [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-changed-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.190 186022 DEBUG nova.compute.manager [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Refreshing instance network info cache due to event network-changed-cecba75e-30de-46e3-9539-c1911e784f2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.191 186022 DEBUG oslo_concurrency.lockutils [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:32:41 compute-0 nova_compute[186018]: 2026-01-05 21:32:41.295 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:32:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:42.871 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:42.872 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:42.872 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:42 compute-0 nova_compute[186018]: 2026-01-05 21:32:42.881 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:42 compute-0 nova_compute[186018]: 2026-01-05 21:32:42.997 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.452 186022 DEBUG nova.network.neutron [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.472 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Releasing lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.473 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance network_info: |[{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.473 186022 DEBUG oslo_concurrency.lockutils [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.474 186022 DEBUG nova.network.neutron [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Refreshing network info cache for port cecba75e-30de-46e3-9539-c1911e784f2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.477 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start _get_guest_xml network_info=[{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.484 186022 WARNING nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.490 186022 DEBUG nova.virt.libvirt.host [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.491 186022 DEBUG nova.virt.libvirt.host [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.500 186022 DEBUG nova.virt.libvirt.host [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.500 186022 DEBUG nova.virt.libvirt.host [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.501 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.501 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.502 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.502 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.503 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.503 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.503 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.504 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.504 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.504 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.504 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.505 186022 DEBUG nova.virt.hardware [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.509 186022 DEBUG nova.virt.libvirt.vif [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": 
"fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.509 186022 DEBUG nova.network.os_vif_util [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.510 186022 DEBUG nova.network.os_vif_util [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.511 186022 DEBUG nova.objects.instance [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.550 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.552 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <uuid>1c4634a9-de38-4683-abb9-3964b285a21c</uuid>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <name>instance-00000009</name>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:name>tempest-ServerActionsTestJSON-server-1019046137</nova:name>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:32:43</nova:creationTime>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:user uuid="7c73fe2d06da4c34ab29da3c61a0989e">tempest-ServerActionsTestJSON-578788577-project-member</nova:user>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:project uuid="5efd2bd3d0424bd99bd88ac5bfe7d457">tempest-ServerActionsTestJSON-578788577</nova:project>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         <nova:port uuid="cecba75e-30de-46e3-9539-c1911e784f2d">
Jan 05 21:32:43 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <system>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="serial">1c4634a9-de38-4683-abb9-3964b285a21c</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="uuid">1c4634a9-de38-4683-abb9-3964b285a21c</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </system>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <os>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </os>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <features>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </features>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:f6:93:1b"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <target dev="tapcecba75e-30"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/console.log" append="off"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <video>
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </video>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:32:43 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:32:43 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:32:43 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:32:43 compute-0 nova_compute[186018]: </domain>
Jan 05 21:32:43 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.554 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Preparing to wait for external event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.554 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.554 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.555 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.556 186022 DEBUG nova.virt.libvirt.vif [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:32:37Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", 
"type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.556 186022 DEBUG nova.network.os_vif_util [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.557 186022 DEBUG nova.network.os_vif_util [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.557 186022 DEBUG os_vif [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.558 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.559 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.559 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.563 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.563 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcecba75e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.564 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcecba75e-30, col_values=(('external_ids', {'iface-id': 'cecba75e-30de-46e3-9539-c1911e784f2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:93:1b', 'vm-uuid': '1c4634a9-de38-4683-abb9-3964b285a21c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:43 compute-0 NetworkManager[56598]: <info>  [1767648763.5666] manager: (tapcecba75e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/48)
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.565 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.569 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.573 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.574 186022 INFO os_vif [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30')
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.671 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.672 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.672 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] No VIF found with MAC fa:16:3e:f6:93:1b, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:32:43 compute-0 nova_compute[186018]: 2026-01-05 21:32:43.673 186022 INFO nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Using config drive
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.209 186022 INFO nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Creating config drive at /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.216 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1fz_xeuo execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.345 186022 DEBUG oslo_concurrency.processutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp1fz_xeuo" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:32:44 compute-0 kernel: tapcecba75e-30: entered promiscuous mode
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.4167] manager: (tapcecba75e-30): new Tun device (/org/freedesktop/NetworkManager/Devices/49)
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.428 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 ovn_controller[98229]: 2026-01-05T21:32:44Z|00106|binding|INFO|Claiming lport cecba75e-30de-46e3-9539-c1911e784f2d for this chassis.
Jan 05 21:32:44 compute-0 ovn_controller[98229]: 2026-01-05T21:32:44Z|00107|binding|INFO|cecba75e-30de-46e3-9539-c1911e784f2d: Claiming fa:16:3e:f6:93:1b 10.100.0.4
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.437 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:93:1b 10.100.0.4'], port_security=['fa:16:3e:f6:93:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c4634a9-de38-4683-abb9-3964b285a21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'neutron:revision_number': '2', 'neutron:security_group_ids': '842e8104-5a29-4d14-99fa-0f1362c35beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dc7cb32-4733-47ef-890a-22095c3cd6b2, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=cecba75e-30de-46e3-9539-c1911e784f2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.439 107689 INFO neutron.agent.ovn.metadata.agent [-] Port cecba75e-30de-46e3-9539-c1911e784f2d in datapath 9d140934-6988-43f2-b45f-49e5cf3de4b0 bound to our chassis
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.440 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:32:44 compute-0 ovn_controller[98229]: 2026-01-05T21:32:44Z|00108|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d ovn-installed in OVS
Jan 05 21:32:44 compute-0 ovn_controller[98229]: 2026-01-05T21:32:44Z|00109|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d up in Southbound
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.444 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.447 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.450 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.455 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[23edb970-d48b-4cbe-bfc3-5d607891bf5e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.456 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d140934-61 in ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.458 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d140934-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.458 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[b3f55b18-8036-48c7-bcb6-e592626623b2]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.459 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[1dd8d91d-f53b-4844-b0c7-e7ecd2779f08]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.472 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[63a76a12-260b-4f5b-a83e-6cdbf951caad]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 systemd-machined[157312]: New machine qemu-9-instance-00000009.
Jan 05 21:32:44 compute-0 systemd[1]: Started Virtual Machine qemu-9-instance-00000009.
Jan 05 21:32:44 compute-0 systemd-udevd[252137]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.498 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3baf50-5c6b-4def-b062-76f47b857b5b]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.5128] device (tapcecba75e-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.5172] device (tapcecba75e-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.529 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[6a742d14-9378-43cc-8929-c33720a734c9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.535 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[2677f168-82b7-4f29-bfc0-c8fb8c18708a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.5376] manager: (tap9d140934-60): new Veth device (/org/freedesktop/NetworkManager/Devices/50)
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.567 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[4ff8502a-789f-4065-8e45-fa5c33d5eec1]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.571 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[e1772d48-e79d-452b-b016-a4f81b391d7f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.5955] device (tap9d140934-60): carrier: link connected
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.601 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[11cf1864-27e9-482c-a402-c753b34c2d11]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.621 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d18e4ba4-cb46-4ff9-888e-1b296f8ef351]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d140934-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:28:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550529, 'reachable_time': 29630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252167, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.642 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[95e1bb8e-1488-402f-a232-a8869cecacf3]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:285f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 550529, 'tstamp': 550529}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252168, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.664 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[447798bc-0144-442e-b001-a9a4a4061e4f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d140934-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:28:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 29], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550529, 'reachable_time': 29630, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252169, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.696 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[bf7ea122-1165-4493-b8ca-d05cb65989d6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.748 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[408271ac-5ce3-4d47-93a8-8f631da756eb]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.750 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d140934-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.750 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.751 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d140934-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.754 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 kernel: tap9d140934-60: entered promiscuous mode
Jan 05 21:32:44 compute-0 NetworkManager[56598]: <info>  [1767648764.7555] manager: (tap9d140934-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/51)
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.758 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d140934-60, col_values=(('external_ids', {'iface-id': '0fbb4d95-b8f2-4898-a3d0-8390d76218f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.759 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:44 compute-0 ovn_controller[98229]: 2026-01-05T21:32:44Z|00110|binding|INFO|Releasing lport 0fbb4d95-b8f2-4898-a3d0-8390d76218f2 from this chassis (sb_readonly=0)
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.761 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.762 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4a1be460-49df-4a7c-8422-19761594647f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.763 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID 9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:32:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:32:44.764 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'env', 'PROCESS_TAG=haproxy-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d140934-6988-43f2-b45f-49e5cf3de4b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:32:44 compute-0 nova_compute[186018]: 2026-01-05 21:32:44.771 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.098 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648765.0985675, 1c4634a9-de38-4683-abb9-3964b285a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.101 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Started (Lifecycle Event)
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.119 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.126 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648765.0986733, 1c4634a9-de38-4683-abb9-3964b285a21c => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.126 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Paused (Lifecycle Event)
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.143 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.148 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.166 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:32:45 compute-0 podman[252204]: 2026-01-05 21:32:45.188149378 +0000 UTC m=+0.059390784 container create 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:32:45 compute-0 systemd[1]: Started libpod-conmon-28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0.scope.
Jan 05 21:32:45 compute-0 podman[252204]: 2026-01-05 21:32:45.159945886 +0000 UTC m=+0.031187312 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:32:45 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:32:45 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/db9694e3caf6296d1c45c9af61c49d800e9f0e65beaa6405eeb0a11a15582ed9/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:32:45 compute-0 podman[252204]: 2026-01-05 21:32:45.285422119 +0000 UTC m=+0.156663535 container init 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:32:45 compute-0 podman[252204]: 2026-01-05 21:32:45.292519726 +0000 UTC m=+0.163761132 container start 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:32:45 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [NOTICE]   (252223) : New worker (252225) forked
Jan 05 21:32:45 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [NOTICE]   (252223) : Loading success.
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.796 186022 DEBUG nova.compute.manager [req-f62d2a14-2a65-4175-965e-a7a9170b9f5f req-c0e4a6f1-7aaa-4d3f-973e-44e1d129f644 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.798 186022 DEBUG oslo_concurrency.lockutils [req-f62d2a14-2a65-4175-965e-a7a9170b9f5f req-c0e4a6f1-7aaa-4d3f-973e-44e1d129f644 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.798 186022 DEBUG oslo_concurrency.lockutils [req-f62d2a14-2a65-4175-965e-a7a9170b9f5f req-c0e4a6f1-7aaa-4d3f-973e-44e1d129f644 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.799 186022 DEBUG oslo_concurrency.lockutils [req-f62d2a14-2a65-4175-965e-a7a9170b9f5f req-c0e4a6f1-7aaa-4d3f-973e-44e1d129f644 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.800 186022 DEBUG nova.compute.manager [req-f62d2a14-2a65-4175-965e-a7a9170b9f5f req-c0e4a6f1-7aaa-4d3f-973e-44e1d129f644 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Processing event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.802 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.813 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.814 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648765.8139358, 1c4634a9-de38-4683-abb9-3964b285a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.815 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Resumed (Lifecycle Event)
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.827 186022 INFO nova.virt.libvirt.driver [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance spawned successfully.
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.827 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.837 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.855 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.862 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.863 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.864 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.865 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.867 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.868 186022 DEBUG nova.virt.libvirt.driver [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:32:45 compute-0 nova_compute[186018]: 2026-01-05 21:32:45.879 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.025 186022 INFO nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Took 8.78 seconds to spawn the instance on the hypervisor.
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.025 186022 DEBUG nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.085 186022 INFO nova.compute.manager [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Took 9.25 seconds to build instance.
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.103 186022 DEBUG oslo_concurrency.lockutils [None req-2c753c57-ff52-4354-8c80-4a02d56843b5 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.348s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.127 186022 DEBUG nova.network.neutron [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updated VIF entry in instance network info cache for port cecba75e-30de-46e3-9539-c1911e784f2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.128 186022 DEBUG nova.network.neutron [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.143 186022 DEBUG oslo_concurrency.lockutils [req-a5d99bba-d1d3-4036-9f75-f205dc819b02 req-c88c98da-bef2-40f7-b3e9-a0b6520b1090 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:32:46 compute-0 nova_compute[186018]: 2026-01-05 21:32:46.542 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:46 compute-0 podman[252235]: 2026-01-05 21:32:46.761480721 +0000 UTC m=+0.095464924 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, release=1755695350, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., name=ubi9-minimal, architecture=x86_64, io.openshift.expose-services=, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, 
managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:32:46 compute-0 podman[252234]: 2026-01-05 21:32:46.848163604 +0000 UTC m=+0.177025792 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 21:32:47 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:32:47 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.618 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.883 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.917 186022 DEBUG nova.compute.manager [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.918 186022 DEBUG oslo_concurrency.lockutils [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.919 186022 DEBUG oslo_concurrency.lockutils [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.919 186022 DEBUG oslo_concurrency.lockutils [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.920 186022 DEBUG nova.compute.manager [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:32:47 compute-0 nova_compute[186018]: 2026-01-05 21:32:47.921 186022 WARNING nova.compute.manager [req-952d458c-ab02-4dd8-8c1e-239eed801eee req-613cbe8d-e061-4b08-98db-3e3f0c3fa3cc 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state active and task_state None.
Jan 05 21:32:48 compute-0 nova_compute[186018]: 2026-01-05 21:32:48.435 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:48 compute-0 nova_compute[186018]: 2026-01-05 21:32:48.567 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:49 compute-0 nova_compute[186018]: 2026-01-05 21:32:49.431 186022 DEBUG nova.compute.manager [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-changed-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:32:49 compute-0 nova_compute[186018]: 2026-01-05 21:32:49.431 186022 DEBUG nova.compute.manager [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Refreshing instance network info cache due to event network-changed-cecba75e-30de-46e3-9539-c1911e784f2d. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:32:49 compute-0 nova_compute[186018]: 2026-01-05 21:32:49.431 186022 DEBUG oslo_concurrency.lockutils [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:32:49 compute-0 nova_compute[186018]: 2026-01-05 21:32:49.432 186022 DEBUG oslo_concurrency.lockutils [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:32:49 compute-0 nova_compute[186018]: 2026-01-05 21:32:49.432 186022 DEBUG nova.network.neutron [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Refreshing network info cache for port cecba75e-30de-46e3-9539-c1911e784f2d _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:32:51 compute-0 podman[252298]: 2026-01-05 21:32:51.755880863 +0000 UTC m=+0.103559327 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 05 21:32:51 compute-0 podman[252299]: 2026-01-05 21:32:51.783402758 +0000 UTC m=+0.117205347 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:32:52 compute-0 nova_compute[186018]: 2026-01-05 21:32:52.886 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:54 compute-0 nova_compute[186018]: 2026-01-05 21:32:54.281 186022 DEBUG nova.network.neutron [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updated VIF entry in instance network info cache for port cecba75e-30de-46e3-9539-c1911e784f2d. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:32:54 compute-0 nova_compute[186018]: 2026-01-05 21:32:54.283 186022 DEBUG nova.network.neutron [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:32:54 compute-0 nova_compute[186018]: 2026-01-05 21:32:54.285 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:54 compute-0 nova_compute[186018]: 2026-01-05 21:32:54.316 186022 DEBUG oslo_concurrency.lockutils [req-fec4f932-d469-41b2-98bf-4831516dc5e5 req-f17d2dd2-525c-4017-b4b1-1c8690562802 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:32:57 compute-0 nova_compute[186018]: 2026-01-05 21:32:57.891 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:58 compute-0 nova_compute[186018]: 2026-01-05 21:32:58.599 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:58 compute-0 podman[252339]: 2026-01-05 21:32:58.742417226 +0000 UTC m=+0.083193832 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:32:59 compute-0 nova_compute[186018]: 2026-01-05 21:32:59.288 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:59 compute-0 nova_compute[186018]: 2026-01-05 21:32:59.355 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:59 compute-0 ovn_controller[98229]: 2026-01-05T21:32:59Z|00111|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:32:59 compute-0 ovn_controller[98229]: 2026-01-05T21:32:59Z|00112|binding|INFO|Releasing lport 0fbb4d95-b8f2-4898-a3d0-8390d76218f2 from this chassis (sb_readonly=0)
Jan 05 21:32:59 compute-0 nova_compute[186018]: 2026-01-05 21:32:59.690 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:32:59 compute-0 podman[202426]: time="2026-01-05T21:32:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:32:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:32:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:32:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:32:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4823 "" "Go-http-client/1.1"
Jan 05 21:33:01 compute-0 openstack_network_exporter[205720]: ERROR   21:33:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:33:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:33:01 compute-0 openstack_network_exporter[205720]: ERROR   21:33:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:33:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:33:02 compute-0 nova_compute[186018]: 2026-01-05 21:33:02.585 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:02 compute-0 nova_compute[186018]: 2026-01-05 21:33:02.894 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:04 compute-0 nova_compute[186018]: 2026-01-05 21:33:04.292 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:05 compute-0 podman[252363]: 2026-01-05 21:33:05.744478905 +0000 UTC m=+0.088142372 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, name=ubi9, build-date=2024-09-18T21:23:30, container_name=kepler, io.openshift.tags=base rhel9, io.openshift.expose-services=, managed_by=edpm_ansible, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 21:33:05 compute-0 podman[252364]: 2026-01-05 21:33:05.752685571 +0000 UTC m=+0.087377992 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2)
Jan 05 21:33:07 compute-0 nova_compute[186018]: 2026-01-05 21:33:07.524 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:07 compute-0 nova_compute[186018]: 2026-01-05 21:33:07.548 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.788 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.789 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.789 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.799 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.810 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.813 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 1c4634a9-de38-4683-abb9-3964b285a21c from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:33:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:07.815 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/1c4634a9-de38-4683-abb9-3964b285a21c -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:33:07 compute-0 nova_compute[186018]: 2026-01-05 21:33:07.898 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.713 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1980 Content-Type: application/json Date: Mon, 05 Jan 2026 21:33:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-68b1e9db-6751-4ca0-a1e4-a6ca5340f83a x-openstack-request-id: req-68b1e9db-6751-4ca0-a1e4-a6ca5340f83a _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.714 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "1c4634a9-de38-4683-abb9-3964b285a21c", "name": "tempest-ServerActionsTestJSON-server-1019046137", "status": "ACTIVE", "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "user_id": "7c73fe2d06da4c34ab29da3c61a0989e", "metadata": {}, "hostId": "f186dcb25d1739191ff6b9138d7761bd6ebe99ecf98eeef466754ca8", "image": {"id": "ebb2027f-05a6-465a-af75-b7da40a91332", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/ebb2027f-05a6-465a-af75-b7da40a91332"}]}, "flavor": {"id": "ce1138a2-4b82-4664-8860-711a956c0882", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ce1138a2-4b82-4664-8860-711a956c0882"}]}, "created": "2026-01-05T21:32:35Z", "updated": "2026-01-05T21:32:46Z", "addresses": {"tempest-ServerActionsTestJSON-2029168979-network": [{"version": 4, "addr": "10.100.0.4", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f6:93:1b"}, {"version": 4, "addr": "192.168.122.233", "OS-EXT-IPS:type": "floating", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f6:93:1b"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/1c4634a9-de38-4683-abb9-3964b285a21c"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/1c4634a9-de38-4683-abb9-3964b285a21c"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": "tempest-keypair-962693419", "OS-SRV-USG:launched_at": "2026-01-05T21:32:46.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "tempest-securitygroup--1484613688"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-00000009", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.714 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/1c4634a9-de38-4683-abb9-3964b285a21c used request id req-68b1e9db-6751-4ca0-a1e4-a6ca5340f83a request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.715 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '1c4634a9-de38-4683-abb9-3964b285a21c', 'name': 'tempest-ServerActionsTestJSON-server-1019046137', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000009', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'user_id': '7c73fe2d06da4c34ab29da3c61a0989e', 'hostId': 'f186dcb25d1739191ff6b9138d7761bd6ebe99ecf98eeef466754ca8', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.715 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.716 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.716 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.716 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.716 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.717 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.717 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.717 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.717 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.717 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:33:08.716280) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.718 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:33:08.717655) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.721 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 1c4634a9-de38-4683-abb9-3964b285a21c / tapcecba75e-30 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.incoming.packets volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.725 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.726 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:33:08.725998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.outgoing.packets volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.727 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:33:08.727199) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:33:08.728424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.728 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:33:08.729338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.729 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.outgoing.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 70 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.730 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:33:08.730592) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.731 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1019046137>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1019046137>]
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:33:08.731818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.732 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:33:08.732818) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.733 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.734 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.734 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.734 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:33:08.734041) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.735 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:33:08.735514) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.753 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.753 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.774 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.allocation volume: 204800 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.775 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:33:08.775905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.776 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1019046137>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: tempest-ServerActionsTestJSON-server-1019046137>]
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.777 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:33:08.777150) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.778 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:33:08.778477) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.779 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.780 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.780 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:33:08.780071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.802 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.824 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/memory.usage volume: Unavailable _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.824 14 WARNING ceilometer.compute.pollsters [-] memory.usage statistic in not available for instance 1c4634a9-de38-4683-abb9-3964b285a21c: ceilometer.compute.pollsters.NoVolumeException
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.825 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.826 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.826 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.826 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:33:08.825534) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.827 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:33:08.826980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:33:08.828647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.875 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.875 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.914 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.bytes volume: 23775232 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.914 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.bytes volume: 2048 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.914 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.915 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/network.incoming.bytes volume: 90 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.916 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.917 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.latency volume: 417405183 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.917 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.latency volume: 2249490 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.917 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.917 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:33:08.915461) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:33:08.916636) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.918 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:33:08.918612) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.requests volume: 760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.read.requests volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.usage volume: 196624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.920 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.921 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:33:08.920211) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:33:08.921763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.922 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 35050000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.923 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/cpu volume: 22510000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:33:08.923478) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.924 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.925 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.925 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.925 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.925 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.925 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.926 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.927 14 DEBUG ceilometer.compute.pollsters [-] 1c4634a9-de38-4683-abb9-3964b285a21c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.927 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:33:08.924727) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.928 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:33:08.926330) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.928 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.928 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.928 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.929 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.930 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.930 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.930 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.930 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.930 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.931 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.931 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.931 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.931 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.931 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.932 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.932 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.932 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.932 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.932 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.933 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:33:08.933 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:33:09 compute-0 nova_compute[186018]: 2026-01-05 21:33:09.296 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:09 compute-0 podman[252402]: 2026-01-05 21:33:09.781125781 +0000 UTC m=+0.121536301 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:33:09 compute-0 ovn_controller[98229]: 2026-01-05T21:33:09Z|00113|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:33:09 compute-0 ovn_controller[98229]: 2026-01-05T21:33:09Z|00114|binding|INFO|Releasing lport 0fbb4d95-b8f2-4898-a3d0-8390d76218f2 from this chassis (sb_readonly=0)
Jan 05 21:33:10 compute-0 nova_compute[186018]: 2026-01-05 21:33:10.082 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:11 compute-0 nova_compute[186018]: 2026-01-05 21:33:11.470 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:12 compute-0 nova_compute[186018]: 2026-01-05 21:33:12.902 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:14 compute-0 nova_compute[186018]: 2026-01-05 21:33:14.299 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.821 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.822 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.856 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.940 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.942 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.953 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:33:16 compute-0 nova_compute[186018]: 2026-01-05 21:33:16.954 186022 INFO nova.compute.claims [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.131 186022 DEBUG nova.compute.provider_tree [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.153 186022 DEBUG nova.scheduler.client.report [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.190 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.248s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.191 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.276 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.277 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.296 186022 INFO nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.314 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.396 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.397 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.397 186022 INFO nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Creating image(s)
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.398 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.398 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.399 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.429 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.458 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.529 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.101s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.531 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.531 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.556 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.595 186022 DEBUG nova.policy [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '1b776719a870485db8e8ec3697bac537', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': 'f530e5001be644ada25ea22d2fc918bb', 'project_domain_id': 'default', 'roles': ['reader', 'member'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.649 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.093s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.650 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.695 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk 1073741824" returned: 0 in 0.045s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.696 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.165s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.697 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:17 compute-0 podman[252428]: 2026-01-05 21:33:17.727499663 +0000 UTC m=+0.072523651 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, vcs-type=git, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, vendor=Red Hat, Inc., io.buildah.version=1.33.7, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.756 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.757 186022 DEBUG nova.virt.disk.api [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Checking if we can resize image /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:33:17 compute-0 podman[252427]: 2026-01-05 21:33:17.75739647 +0000 UTC m=+0.106310490 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.757 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.814 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.815 186022 DEBUG nova.virt.disk.api [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Cannot resize image /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.815 186022 DEBUG nova.objects.instance [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lazy-loading 'migration_context' on Instance uuid 8123e49e-6aaf-4e97-9f0e-4039061d12d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.841 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.841 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Ensure instance console log exists: /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.841 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.842 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.842 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:17 compute-0 nova_compute[186018]: 2026-01-05 21:33:17.904 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:18 compute-0 nova_compute[186018]: 2026-01-05 21:33:18.857 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Successfully created port: 8a773115-5cfe-4366-97f0-643e66599184 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:33:19 compute-0 nova_compute[186018]: 2026-01-05 21:33:19.302 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.040 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Successfully updated port: 8a773115-5cfe-4366-97f0-643e66599184 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.057 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.058 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquired lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.059 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.142 186022 DEBUG nova.compute.manager [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-changed-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.143 186022 DEBUG nova.compute.manager [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Refreshing instance network info cache due to event network-changed-8a773115-5cfe-4366-97f0-643e66599184. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.145 186022 DEBUG oslo_concurrency.lockutils [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.216 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:33:20 compute-0 nova_compute[186018]: 2026-01-05 21:33:20.604 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:20 compute-0 ovn_controller[98229]: 2026-01-05T21:33:20Z|00014|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:93:1b 10.100.0.4
Jan 05 21:33:20 compute-0 ovn_controller[98229]: 2026-01-05T21:33:20Z|00015|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:93:1b 10.100.0.4
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.724 186022 DEBUG nova.network.neutron [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updating instance_info_cache with network_info: [{"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.741 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Releasing lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.741 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Instance network_info: |[{"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.742 186022 DEBUG oslo_concurrency.lockutils [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.742 186022 DEBUG nova.network.neutron [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Refreshing network info cache for port 8a773115-5cfe-4366-97f0-643e66599184 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.744 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Start _get_guest_xml network_info=[{"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.750 186022 WARNING nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.759 186022 DEBUG nova.virt.libvirt.host [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.760 186022 DEBUG nova.virt.libvirt.host [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.765 186022 DEBUG nova.virt.libvirt.host [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.766 186022 DEBUG nova.virt.libvirt.host [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.766 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.766 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.767 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.767 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.767 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.767 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.768 186022 DEBUG nova.virt.hardware [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.773 186022 DEBUG nova.virt.libvirt.vif [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:33:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1976684734',display_name='tempest-TestServerBasicOps-server-1976684734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1976684734',id=10,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH2qHQTdCMojQaAboVuJHZOo3UWBhUhPK+SxvS8rEHWVcJB4wATMh3Lnn5L4KoBVF1RMoE6cX5F41gAxeArXKiTxZK88pNt76pU5XoY2zaRV8Be3zK8C5dt0ZeQ3UH4eg==',key_name='tempest-TestServerBasicOps-1738816453',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f530e5001be644ada25ea22d2fc918bb',ramdisk_id='',reservation_id='r-ljq0ps0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-273363449',owner_user_name='tempest-TestServerBasicOps-273363449-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:33:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b776719a870485db8e8ec3697bac537',uuid=8123e49e-6aaf-4e97-9f0e-4039061d12d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, 
"ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.773 186022 DEBUG nova.network.os_vif_util [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converting VIF {"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.774 186022 DEBUG nova.network.os_vif_util [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.774 186022 DEBUG nova.objects.instance [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lazy-loading 'pci_devices' on Instance uuid 8123e49e-6aaf-4e97-9f0e-4039061d12d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.806 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <uuid>8123e49e-6aaf-4e97-9f0e-4039061d12d3</uuid>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <name>instance-0000000a</name>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:name>tempest-TestServerBasicOps-server-1976684734</nova:name>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:33:21</nova:creationTime>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:user uuid="1b776719a870485db8e8ec3697bac537">tempest-TestServerBasicOps-273363449-project-member</nova:user>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:project uuid="f530e5001be644ada25ea22d2fc918bb">tempest-TestServerBasicOps-273363449</nova:project>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         <nova:port uuid="8a773115-5cfe-4366-97f0-643e66599184">
Jan 05 21:33:21 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.6" ipVersion="4"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <system>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="serial">8123e49e-6aaf-4e97-9f0e-4039061d12d3</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="uuid">8123e49e-6aaf-4e97-9f0e-4039061d12d3</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </system>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <os>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </os>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <features>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </features>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.config"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:9e:4e:dc"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <target dev="tap8a773115-5c"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/console.log" append="off"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <video>
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </video>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:33:21 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:33:21 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:33:21 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:33:21 compute-0 nova_compute[186018]: </domain>
Jan 05 21:33:21 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.806 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Preparing to wait for external event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.806 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.806 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.807 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.807 186022 DEBUG nova.virt.libvirt.vif [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:33:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1976684734',display_name='tempest-TestServerBasicOps-server-1976684734',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1976684734',id=10,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH2qHQTdCMojQaAboVuJHZOo3UWBhUhPK+SxvS8rEHWVcJB4wATMh3Lnn5L4KoBVF1RMoE6cX5F41gAxeArXKiTxZK88pNt76pU5XoY2zaRV8Be3zK8C5dt0ZeQ3UH4eg==',key_name='tempest-TestServerBasicOps-1738816453',keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='f530e5001be644ada25ea22d2fc918bb',ramdisk_id='',reservation_id='r-ljq0ps0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-TestServerBasicOps-273363449',owner_user_name='tempest-TestServerBasicOps-273363449-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:33:17Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b776719a870485db8e8ec3697bac537',uuid=8123e49e-6aaf-4e97-9f0e-4039061d12d3,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": 
{}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.808 186022 DEBUG nova.network.os_vif_util [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converting VIF {"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.808 186022 DEBUG nova.network.os_vif_util [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.808 186022 DEBUG os_vif [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.810 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.810 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.811 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.815 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.815 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap8a773115-5c, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.816 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap8a773115-5c, col_values=(('external_ids', {'iface-id': '8a773115-5cfe-4366-97f0-643e66599184', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:9e:4e:dc', 'vm-uuid': '8123e49e-6aaf-4e97-9f0e-4039061d12d3'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.818 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:21 compute-0 NetworkManager[56598]: <info>  [1767648801.8189] manager: (tap8a773115-5c): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/52)
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.819 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.826 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.827 186022 INFO os_vif [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c')
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.904 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.905 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.905 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] No VIF found with MAC fa:16:3e:9e:4e:dc, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:33:21 compute-0 nova_compute[186018]: 2026-01-05 21:33:21.905 186022 INFO nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Using config drive
Jan 05 21:33:21 compute-0 podman[252500]: 2026-01-05 21:33:21.951137913 +0000 UTC m=+0.082847882 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:33:21 compute-0 podman[252499]: 2026-01-05 21:33:21.971798387 +0000 UTC m=+0.107677456 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.363 186022 INFO nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Creating config drive at /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.config
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.370 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjsvhlsiy execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.499 186022 DEBUG oslo_concurrency.processutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpjsvhlsiy" returned: 0 in 0.129s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:22 compute-0 kernel: tap8a773115-5c: entered promiscuous mode
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.5784] manager: (tap8a773115-5c): new Tun device (/org/freedesktop/NetworkManager/Devices/53)
Jan 05 21:33:22 compute-0 ovn_controller[98229]: 2026-01-05T21:33:22Z|00115|binding|INFO|Claiming lport 8a773115-5cfe-4366-97f0-643e66599184 for this chassis.
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.582 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 ovn_controller[98229]: 2026-01-05T21:33:22Z|00116|binding|INFO|8a773115-5cfe-4366-97f0-643e66599184: Claiming fa:16:3e:9e:4e:dc 10.100.0.6
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.590 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:4e:dc 10.100.0.6'], port_security=['fa:16:3e:9e:4e:dc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8123e49e-6aaf-4e97-9f0e-4039061d12d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f530e5001be644ada25ea22d2fc918bb', 'neutron:revision_number': '2', 'neutron:security_group_ids': '095e9468-180c-4738-8a72-aee138b2c523 2c4e81e4-d89a-4021-a6bc-8babb492b41e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a181296e-c1b7-4d0e-85b2-ec2adaea4841, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=8a773115-5cfe-4366-97f0-643e66599184) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.592 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 8a773115-5cfe-4366-97f0-643e66599184 in datapath aae0d8ab-f4c2-45a3-98ea-6057c14a083d bound to our chassis
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.595 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network aae0d8ab-f4c2-45a3-98ea-6057c14a083d
Jan 05 21:33:22 compute-0 ovn_controller[98229]: 2026-01-05T21:33:22Z|00117|binding|INFO|Setting lport 8a773115-5cfe-4366-97f0-643e66599184 ovn-installed in OVS
Jan 05 21:33:22 compute-0 ovn_controller[98229]: 2026-01-05T21:33:22Z|00118|binding|INFO|Setting lport 8a773115-5cfe-4366-97f0-643e66599184 up in Southbound
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.597 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.608 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.617 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0572f26b-516a-4b2e-a367-d8beb54fac24]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.619 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapaae0d8ab-f1 in ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.621 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapaae0d8ab-f0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.622 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[6539abba-80bd-49e2-88e8-0f95e9938d50]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.623 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a52fbac3-1dae-45e2-a72a-26e7c286ad09]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 systemd-machined[157312]: New machine qemu-10-instance-0000000a.
Jan 05 21:33:22 compute-0 systemd[1]: Started Virtual Machine qemu-10-instance-0000000a.
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.649 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[db9fe64e-bdab-4a55-a203-57baf69d8c93]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 systemd-udevd[252563]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.676 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[de76d261-a394-4eee-a3ca-17e686fb5024]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.6789] device (tap8a773115-5c): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.6797] device (tap8a773115-5c): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.716 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[50c4ded8-f9d0-4626-bcdf-8af76416960e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 systemd-udevd[252566]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.7255] manager: (tapaae0d8ab-f0): new Veth device (/org/freedesktop/NetworkManager/Devices/54)
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.724 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0e2afa6c-415b-41c9-85db-656d98d8c812]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.756 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[c75c1188-1b64-42e0-8104-ce68d33e589b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.759 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[e065fd28-d826-49a1-88eb-f68734a76c72]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.7829] device (tapaae0d8ab-f0): carrier: link connected
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.792 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[29cda485-b125-4be7-ae67-18625b131fda]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.812 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[10371b70-69ec-4fee-9aa7-9d3944587dcc]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae0d8ab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554348, 'reachable_time': 25126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252594, 'error': None, 'target': 'ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.831 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[fd11f956-90c0-4a0c-9216-3a44a133fe1f]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe55:761'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 554348, 'tstamp': 554348}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252595, 'error': None, 'target': 'ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.849 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[98f426dc-e10e-413f-a6aa-3bc9342c0a7e]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapaae0d8ab-f1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:55:07:61'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 31], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554348, 'reachable_time': 25126, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252596, 'error': None, 'target': 'ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.887 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[57b72d62-1f81-46a1-a021-e98fba7e98d9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.907 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.954 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ea2477a7-49ad-4359-bc20-25efc4d15388]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.956 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae0d8ab-f0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.956 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.956 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapaae0d8ab-f0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:22 compute-0 kernel: tapaae0d8ab-f0: entered promiscuous mode
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.959 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 NetworkManager[56598]: <info>  [1767648802.9601] manager: (tapaae0d8ab-f0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/55)
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.962 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.964 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapaae0d8ab-f0, col_values=(('external_ids', {'iface-id': '42eee9ad-544f-46ae-a1ce-2e7fc398eff7'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:22 compute-0 ovn_controller[98229]: 2026-01-05T21:33:22Z|00119|binding|INFO|Releasing lport 42eee9ad-544f-46ae-a1ce-2e7fc398eff7 from this chassis (sb_readonly=0)
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.965 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 nova_compute[186018]: 2026-01-05 21:33:22.993 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.995 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/aae0d8ab-f4c2-45a3-98ea-6057c14a083d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/aae0d8ab-f4c2-45a3-98ea-6057c14a083d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.996 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[0a146f41-0bbb-4bda-8902-a840936336ee]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.997 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-aae0d8ab-f4c2-45a3-98ea-6057c14a083d
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/aae0d8ab-f4c2-45a3-98ea-6057c14a083d.pid.haproxy
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID aae0d8ab-f4c2-45a3-98ea-6057c14a083d
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:33:22 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:22.997 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'env', 'PROCESS_TAG=haproxy-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/aae0d8ab-f4c2-45a3-98ea-6057c14a083d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.056 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648803.0552957, 8123e49e-6aaf-4e97-9f0e-4039061d12d3 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.057 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] VM Started (Lifecycle Event)
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.083 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.090 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648803.055455, 8123e49e-6aaf-4e97-9f0e-4039061d12d3 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.091 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] VM Paused (Lifecycle Event)
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.107 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.113 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.135 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:33:23 compute-0 podman[252634]: 2026-01-05 21:33:23.438199645 +0000 UTC m=+0.084799334 container create bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.485 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Skipping network cache update for instance because it is Building. _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9871
Jan 05 21:33:23 compute-0 podman[252634]: 2026-01-05 21:33:23.39735881 +0000 UTC m=+0.043958559 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:33:23 compute-0 systemd[1]: Started libpod-conmon-bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717.scope.
Jan 05 21:33:23 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:33:23 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/7f7674c786b5ca6255937aa4e61e1ab6f5d31a3d3f19460eaf013802073e540d/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:33:23 compute-0 podman[252634]: 2026-01-05 21:33:23.572673985 +0000 UTC m=+0.219273694 container init bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2)
Jan 05 21:33:23 compute-0 podman[252634]: 2026-01-05 21:33:23.582384621 +0000 UTC m=+0.228984310 container start bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:33:23 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [NOTICE]   (252653) : New worker (252655) forked
Jan 05 21:33:23 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [NOTICE]   (252653) : Loading success.
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.779 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.779 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.780 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.780 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.820 186022 DEBUG nova.network.neutron [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updated VIF entry in instance network info cache for port 8a773115-5cfe-4366-97f0-643e66599184. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.821 186022 DEBUG nova.network.neutron [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updating instance_info_cache with network_info: [{"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.823 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:23.822 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=14, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=13) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:33:23 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:23.824 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 7 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:33:23 compute-0 nova_compute[186018]: 2026-01-05 21:33:23.834 186022 DEBUG oslo_concurrency.lockutils [req-5a168d53-9673-46fc-8787-524502def1e6 req-7d652f33-ae26-4ad8-866a-8e50bc5966f1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:25 compute-0 nova_compute[186018]: 2026-01-05 21:33:25.562 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:25 compute-0 nova_compute[186018]: 2026-01-05 21:33:25.583 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:25 compute-0 nova_compute[186018]: 2026-01-05 21:33:25.584 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:33:25 compute-0 nova_compute[186018]: 2026-01-05 21:33:25.584 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:25 compute-0 nova_compute[186018]: 2026-01-05 21:33:25.584 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:33:26 compute-0 nova_compute[186018]: 2026-01-05 21:33:26.819 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.024 186022 DEBUG nova.compute.manager [req-798da519-1612-4939-946b-d961b13d1933 req-22d86d08-9ffc-4d2e-9e91-679e616629e9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.025 186022 DEBUG oslo_concurrency.lockutils [req-798da519-1612-4939-946b-d961b13d1933 req-22d86d08-9ffc-4d2e-9e91-679e616629e9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.025 186022 DEBUG oslo_concurrency.lockutils [req-798da519-1612-4939-946b-d961b13d1933 req-22d86d08-9ffc-4d2e-9e91-679e616629e9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.025 186022 DEBUG oslo_concurrency.lockutils [req-798da519-1612-4939-946b-d961b13d1933 req-22d86d08-9ffc-4d2e-9e91-679e616629e9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.025 186022 DEBUG nova.compute.manager [req-798da519-1612-4939-946b-d961b13d1933 req-22d86d08-9ffc-4d2e-9e91-679e616629e9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Processing event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.026 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Instance event wait completed in 3 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.032 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648807.032147, 8123e49e-6aaf-4e97-9f0e-4039061d12d3 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.032 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] VM Resumed (Lifecycle Event)
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.034 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.041 186022 INFO nova.virt.libvirt.driver [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Instance spawned successfully.
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.042 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.051 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.062 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.068 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.068 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.069 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.069 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.070 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.071 186022 DEBUG nova.virt.libvirt.driver [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.096 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.127 186022 INFO nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Took 9.73 seconds to spawn the instance on the hypervisor.
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.128 186022 DEBUG nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.194 186022 INFO nova.compute.manager [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Took 10.29 seconds to build instance.
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.226 186022 DEBUG oslo_concurrency.lockutils [None req-6fb3238f-9879-47b5-bc10-a493c672e2d1 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 10.404s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.465 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.467 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:27 compute-0 nova_compute[186018]: 2026-01-05 21:33:27.910 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:28 compute-0 nova_compute[186018]: 2026-01-05 21:33:28.469 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.100 186022 DEBUG nova.compute.manager [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.100 186022 DEBUG oslo_concurrency.lockutils [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.100 186022 DEBUG oslo_concurrency.lockutils [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.101 186022 DEBUG oslo_concurrency.lockutils [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.101 186022 DEBUG nova.compute.manager [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] No waiting events found dispatching network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.101 186022 WARNING nova.compute.manager [req-ff757fe1-a6d2-406f-858e-d977099fca0c req-a2349f5f-c7e7-46c7-a9ea-eeb20f5f08e0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received unexpected event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 for instance with vm_state active and task_state None.
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.490 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.491 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.491 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.603 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:29 compute-0 podman[252665]: 2026-01-05 21:33:29.611902055 +0000 UTC m=+0.060148324 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.676 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.677 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.734 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.744 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:29 compute-0 podman[202426]: time="2026-01-05T21:33:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:33:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:33:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 30973 "" "Go-http-client/1.1"
Jan 05 21:33:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:33:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5293 "" "Go-http-client/1.1"
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.822 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json" returned: 0 in 0.078s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.825 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.897 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.914 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:29 compute-0 ovn_controller[98229]: 2026-01-05T21:33:29Z|00120|memory|INFO|peak resident set size grew 52% in last 2787.2 seconds, from 16000 kB to 24380 kB
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.987 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:29 compute-0 ovn_controller[98229]: 2026-01-05T21:33:29Z|00121|memory|INFO|idl-cells-OVN_Southbound:10982 idl-cells-Open_vSwitch:927 if_status_mgr_ifaces_state_usage-KB:1 if_status_mgr_ifaces_usage-KB:1 lflow-cache-entries-cache-expr:390 lflow-cache-entries-cache-matches:296 lflow-cache-size-KB:1609 local_datapath_usage-KB:3 ofctrl_desired_flow_usage-KB:678 ofctrl_installed_flow_usage-KB:494 ofctrl_sb_flow_ref_usage-KB:257
Jan 05 21:33:29 compute-0 nova_compute[186018]: 2026-01-05 21:33:29.990 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.060 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.602 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.604 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4860MB free_disk=72.3207015991211GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.604 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.605 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.725 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.726 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 1c4634a9-de38-4683-abb9-3964b285a21c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.726 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 8123e49e-6aaf-4e97-9f0e-4039061d12d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.727 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.727 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.806 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.825 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:33:30 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:30.826 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '14'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.851 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:33:30 compute-0 nova_compute[186018]: 2026-01-05 21:33:30.852 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.247s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.200 186022 DEBUG nova.compute.manager [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-changed-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.200 186022 DEBUG nova.compute.manager [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Refreshing instance network info cache due to event network-changed-8a773115-5cfe-4366-97f0-643e66599184. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.201 186022 DEBUG oslo_concurrency.lockutils [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.201 186022 DEBUG oslo_concurrency.lockutils [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.201 186022 DEBUG nova.network.neutron [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Refreshing network info cache for port 8a773115-5cfe-4366-97f0-643e66599184 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:33:31 compute-0 openstack_network_exporter[205720]: ERROR   21:33:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:33:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:33:31 compute-0 openstack_network_exporter[205720]: ERROR   21:33:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:33:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.821 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.853 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:31 compute-0 nova_compute[186018]: 2026-01-05 21:33:31.853 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.469 186022 DEBUG nova.network.neutron [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updated VIF entry in instance network info cache for port 8a773115-5cfe-4366-97f0-643e66599184. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.470 186022 DEBUG nova.network.neutron [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updating instance_info_cache with network_info: [{"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.486 186022 DEBUG oslo_concurrency.lockutils [req-199b2ac4-2ffb-4b65-87ef-57a60735e16c req-ad0a6cb8-601e-4125-93e5-c54beb82b316 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-8123e49e-6aaf-4e97-9f0e-4039061d12d3" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.794 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.795 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.819 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.906 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.908 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.914 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.920 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:33:32 compute-0 nova_compute[186018]: 2026-01-05 21:33:32.921 186022 INFO nova.compute.claims [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.062 186022 DEBUG nova.compute.provider_tree [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.077 186022 DEBUG nova.scheduler.client.report [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.098 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.099 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.167 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.168 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.189 186022 INFO nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.215 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.308 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.309 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.310 186022 INFO nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Creating image(s)
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.311 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.311 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.312 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.312 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.313 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" acquired by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:33 compute-0 nova_compute[186018]: 2026-01-05 21:33:33.436 186022 DEBUG nova.policy [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d77496083304392a3bddf3b3cc09d6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.181 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Successfully created port: d05ce4e7-0fd8-4cf1-8711-f2a049118a41 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.800 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.part --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.861 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Successfully updated port: d05ce4e7-0fd8-4cf1-8711-f2a049118a41 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.882 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.883 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.884 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.906 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.part --force-share --output=json" returned: 0 in 0.106s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.907 186022 DEBUG nova.virt.images [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] be6cfe06-61ed-4c76-8e1d-bc9df6929005 was qcow2, converting to raw fetch_to_raw /usr/lib/python3.9/site-packages/nova/virt/images.py:242
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.909 186022 DEBUG nova.privsep.utils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Path '/var/lib/nova/instances' supports direct I/O supports_direct_io /usr/lib/python3.9/site-packages/nova/privsep/utils.py:63
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.910 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.part /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.converted execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.950 186022 DEBUG nova.compute.manager [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-changed-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.952 186022 DEBUG nova.compute.manager [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Refreshing instance network info cache due to event network-changed-d05ce4e7-0fd8-4cf1-8711-f2a049118a41. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:33:34 compute-0 nova_compute[186018]: 2026-01-05 21:33:34.953 186022 DEBUG oslo_concurrency.lockutils [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.053 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.225 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "qemu-img convert -t none -O raw -f qcow2 /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.part /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.converted" returned: 0 in 0.316s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.229 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.converted --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.311 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f.converted --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.314 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" "released" by "nova.virt.libvirt.imagebackend.Image.cache.<locals>.fetch_func_sync" :: held 2.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.343 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.423 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.080s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.424 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.425 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.444 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.513 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.515 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.563 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk 1073741824" returned: 0 in 0.048s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.564 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.139s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.565 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.623 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.625 186022 DEBUG nova.virt.disk.api [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Checking if we can resize image /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.626 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.688 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.690 186022 DEBUG nova.virt.disk.api [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Cannot resize image /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.691 186022 DEBUG nova.objects.instance [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'migration_context' on Instance uuid fe15eddf-ceea-4584-95df-dc1ea54e3c25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.709 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.710 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Ensure instance console log exists: /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.712 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.713 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:35 compute-0 nova_compute[186018]: 2026-01-05 21:33:35.713 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:36 compute-0 nova_compute[186018]: 2026-01-05 21:33:36.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:33:36 compute-0 podman[252738]: 2026-01-05 21:33:36.773100365 +0000 UTC m=+0.108355183 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:33:36 compute-0 podman[252737]: 2026-01-05 21:33:36.775895499 +0000 UTC m=+0.117822733 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., name=ubi9, release=1214.1726694543, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, io.buildah.version=1.29.0, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, managed_by=edpm_ansible, container_name=kepler, io.openshift.tags=base rhel9, release-0.7.12=, version=9.4, architecture=x86_64, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, vendor=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Jan 05 21:33:36 compute-0 nova_compute[186018]: 2026-01-05 21:33:36.826 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:37 compute-0 nova_compute[186018]: 2026-01-05 21:33:37.919 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.612 186022 DEBUG nova.network.neutron [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.641 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.642 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Instance network_info: |[{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.643 186022 DEBUG oslo_concurrency.lockutils [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.643 186022 DEBUG nova.network.neutron [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Refreshing network info cache for port d05ce4e7-0fd8-4cf1-8711-f2a049118a41 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.647 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Start _get_guest_xml network_info=[{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.663 186022 WARNING nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.670 186022 DEBUG nova.virt.libvirt.host [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.672 186022 DEBUG nova.virt.libvirt.host [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.679 186022 DEBUG nova.virt.libvirt.host [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.680 186022 DEBUG nova.virt.libvirt.host [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.680 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.681 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.681 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.682 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.682 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.683 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.683 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.684 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.684 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.685 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.686 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.686 186022 DEBUG nova.virt.hardware [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.691 186022 DEBUG nova.virt.libvirt.vif [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:33:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',id=11,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-n5lr03o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:33:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=fe15eddf-ceea-4584-95df-dc1ea54e3c25,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.692 186022 DEBUG nova.network.os_vif_util [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.693 186022 DEBUG nova.network.os_vif_util [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.695 186022 DEBUG nova.objects.instance [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'pci_devices' on Instance uuid fe15eddf-ceea-4584-95df-dc1ea54e3c25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.713 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <uuid>fe15eddf-ceea-4584-95df-dc1ea54e3c25</uuid>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <name>instance-0000000b</name>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:name>te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy</nova:name>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:33:38</nova:creationTime>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:user uuid="4adc8921daaf44d4b88d43bd5764da44">tempest-PrometheusGabbiTest-1091853177-project-member</nova:user>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:project uuid="0d77496083304392a3bddf3b3cc09d6f">tempest-PrometheusGabbiTest-1091853177</nova:project>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="be6cfe06-61ed-4c76-8e1d-bc9df6929005"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         <nova:port uuid="d05ce4e7-0fd8-4cf1-8711-f2a049118a41">
Jan 05 21:33:38 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.203" ipVersion="4"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <system>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="serial">fe15eddf-ceea-4584-95df-dc1ea54e3c25</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="uuid">fe15eddf-ceea-4584-95df-dc1ea54e3c25</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </system>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <os>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </os>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <features>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </features>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.config"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:f6:00:12"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <target dev="tapd05ce4e7-0f"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/console.log" append="off"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <video>
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </video>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:33:38 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:33:38 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:33:38 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:33:38 compute-0 nova_compute[186018]: </domain>
Jan 05 21:33:38 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.714 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Preparing to wait for external event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.715 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.716 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.716 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.717 186022 DEBUG nova.virt.libvirt.vif [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:33:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',id=11,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-n5lr03o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:33:33Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=fe15eddf-ceea-4584-95df-dc1ea54e3c25,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.717 186022 DEBUG nova.network.os_vif_util [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.721 186022 DEBUG nova.network.os_vif_util [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.721 186022 DEBUG os_vif [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.722 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.723 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.724 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.728 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.729 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd05ce4e7-0f, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.729 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd05ce4e7-0f, col_values=(('external_ids', {'iface-id': 'd05ce4e7-0fd8-4cf1-8711-f2a049118a41', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:00:12', 'vm-uuid': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.732 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:38 compute-0 NetworkManager[56598]: <info>  [1767648818.7354] manager: (tapd05ce4e7-0f): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/56)
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.736 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.743 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.744 186022 INFO os_vif [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f')
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.798 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.798 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.798 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No VIF found with MAC fa:16:3e:f6:00:12, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:33:38 compute-0 nova_compute[186018]: 2026-01-05 21:33:38.799 186022 INFO nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Using config drive
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.047 186022 INFO nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Creating config drive at /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.config
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.062 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk5t8x3r execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.210 186022 DEBUG oslo_concurrency.processutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpqk5t8x3r" returned: 0 in 0.149s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:40 compute-0 kernel: tapd05ce4e7-0f: entered promiscuous mode
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.3486] manager: (tapd05ce4e7-0f): new Tun device (/org/freedesktop/NetworkManager/Devices/57)
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.351 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 ovn_controller[98229]: 2026-01-05T21:33:40Z|00122|binding|INFO|Claiming lport d05ce4e7-0fd8-4cf1-8711-f2a049118a41 for this chassis.
Jan 05 21:33:40 compute-0 ovn_controller[98229]: 2026-01-05T21:33:40Z|00123|binding|INFO|d05ce4e7-0fd8-4cf1-8711-f2a049118a41: Claiming fa:16:3e:f6:00:12 10.100.0.203
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.391 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:00:12 10.100.0.203'], port_security=['fa:16:3e:f6:00:12 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=d05ce4e7-0fd8-4cf1-8711-f2a049118a41) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.392 107689 INFO neutron.agent.ovn.metadata.agent [-] Port d05ce4e7-0fd8-4cf1-8711-f2a049118a41 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab bound to our chassis
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.395 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.412 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4bf3eeaa-ede6-4cae-9fee-0b61a46b01fe]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.414 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tapcfd3046a-c1 in ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.416 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tapcfd3046a-c0 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.416 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[88787a4c-5760-488d-b69f-dc36ee6caa98]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.417 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c862e064-dfa7-48af-9293-cfd8823dcd11]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.421 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 systemd-udevd[252808]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.434 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[d9a13e5b-f3b4-41a9-a22e-240ffe0da41b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.4399] device (tapd05ce4e7-0f): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.4412] device (tapd05ce4e7-0f): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:33:40 compute-0 ovn_controller[98229]: 2026-01-05T21:33:40Z|00124|binding|INFO|Setting lport d05ce4e7-0fd8-4cf1-8711-f2a049118a41 ovn-installed in OVS
Jan 05 21:33:40 compute-0 ovn_controller[98229]: 2026-01-05T21:33:40Z|00125|binding|INFO|Setting lport d05ce4e7-0fd8-4cf1-8711-f2a049118a41 up in Southbound
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.445 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 systemd-machined[157312]: New machine qemu-11-instance-0000000b.
Jan 05 21:33:40 compute-0 systemd[1]: Started Virtual Machine qemu-11-instance-0000000b.
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.466 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[418298bf-7c6e-487e-a775-8f39a5af8c5d]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 podman[252789]: 2026-01-05 21:33:40.482799223 +0000 UTC m=+0.143673534 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.498 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[e05fb20f-e8fd-45d9-baf0-51376c8bdda0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.505 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[2a331f4f-f3df-4d96-839e-8c1c11e76de7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 systemd-udevd[252812]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.5070] manager: (tapcfd3046a-c0): new Veth device (/org/freedesktop/NetworkManager/Devices/58)
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.536 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[c006189d-d6d5-4b3a-84b3-93df7f34ea91]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.550 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[7c715e48-ab2e-4ada-bda6-3950e0cb6c77]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.5792] device (tapcfd3046a-c0): carrier: link connected
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.590 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[0d5bd505-fead-44b8-b47d-078383391f1b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.610 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c78d9220-204d-4199-9f00-38ba560b756b]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 16567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 252849, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.637 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c5e203a6-35e5-4904-a09e-05cd5a218de4]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fe9f:257c'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556128, 'tstamp': 556128}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 252850, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.663 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4197bdbf-6882-4d6d-a0a5-071a2e2f5d7f]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 16567, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 252851, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.720 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4cf1ccc3-cf76-4697-b6b4-cb23fb13910a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.793 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[bfe15401-2dc6-4c6d-bf01-fe4f7e10d802]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.795 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.795 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.796 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfd3046a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.798 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 kernel: tapcfd3046a-c0: entered promiscuous mode
Jan 05 21:33:40 compute-0 NetworkManager[56598]: <info>  [1767648820.7993] manager: (tapcfd3046a-c0): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/59)
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.805 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.806 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfd3046a-c0, col_values=(('external_ids', {'iface-id': '68b7e7cf-3a36-4106-85be-cc39d85ff653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.807 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 ovn_controller[98229]: 2026-01-05T21:33:40Z|00126|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.828 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.830 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.832 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/cfd3046a-c974-4a8e-be8e-0c5c965904ab.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/cfd3046a-c974-4a8e-be8e-0c5c965904ab.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.833 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d2d601a3-7b25-4c59-9f4a-be4d9c44e0ba]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.835 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/cfd3046a-c974-4a8e-be8e-0c5c965904ab.pid.haproxy
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:33:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:40.836 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'env', 'PROCESS_TAG=haproxy-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/cfd3046a-c974-4a8e-be8e-0c5c965904ab.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.933 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648820.9325373, fe15eddf-ceea-4584-95df-dc1ea54e3c25 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.933 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] VM Started (Lifecycle Event)
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.958 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.968 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648820.93264, fe15eddf-ceea-4584-95df-dc1ea54e3c25 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.968 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] VM Paused (Lifecycle Event)
Jan 05 21:33:40 compute-0 nova_compute[186018]: 2026-01-05 21:33:40.994 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.001 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.033 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:33:41 compute-0 podman[252890]: 2026-01-05 21:33:41.300213294 +0000 UTC m=+0.084362592 container create 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true)
Jan 05 21:33:41 compute-0 podman[252890]: 2026-01-05 21:33:41.255284191 +0000 UTC m=+0.039433509 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:33:41 compute-0 systemd[1]: Started libpod-conmon-9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e.scope.
Jan 05 21:33:41 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:33:41 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/2750673adb764cef734147431fa120d99146f8cc04e7f186b5e132a3548e49ad/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:33:41 compute-0 podman[252890]: 2026-01-05 21:33:41.502987223 +0000 UTC m=+0.287136541 container init 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0)
Jan 05 21:33:41 compute-0 podman[252890]: 2026-01-05 21:33:41.511102027 +0000 UTC m=+0.295251315 container start 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.518 186022 DEBUG nova.network.neutron [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated VIF entry in instance network info cache for port d05ce4e7-0fd8-4cf1-8711-f2a049118a41. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.518 186022 DEBUG nova.network.neutron [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.536 186022 DEBUG oslo_concurrency.lockutils [req-2b9a22ec-985e-4757-b976-2322b0f6210e req-de58d75a-018b-4b15-a2f6-0639c24a0ca6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:41 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [NOTICE]   (252909) : New worker (252911) forked
Jan 05 21:33:41 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [NOTICE]   (252909) : Loading success.
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.612 186022 DEBUG nova.compute.manager [req-db02bd82-2fcd-420e-b829-1febc610e5e2 req-dd96bd6c-9695-4f1b-8252-2f1a055523e6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.613 186022 DEBUG oslo_concurrency.lockutils [req-db02bd82-2fcd-420e-b829-1febc610e5e2 req-dd96bd6c-9695-4f1b-8252-2f1a055523e6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.613 186022 DEBUG oslo_concurrency.lockutils [req-db02bd82-2fcd-420e-b829-1febc610e5e2 req-dd96bd6c-9695-4f1b-8252-2f1a055523e6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.613 186022 DEBUG oslo_concurrency.lockutils [req-db02bd82-2fcd-420e-b829-1febc610e5e2 req-dd96bd6c-9695-4f1b-8252-2f1a055523e6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.613 186022 DEBUG nova.compute.manager [req-db02bd82-2fcd-420e-b829-1febc610e5e2 req-dd96bd6c-9695-4f1b-8252-2f1a055523e6 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Processing event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.614 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.620 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648821.62065, fe15eddf-ceea-4584-95df-dc1ea54e3c25 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.621 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] VM Resumed (Lifecycle Event)
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.623 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.628 186022 INFO nova.virt.libvirt.driver [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Instance spawned successfully.
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.628 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.895 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.902 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.902 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.903 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.903 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.904 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.904 186022 DEBUG nova.virt.libvirt.driver [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.912 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.945 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.979 186022 INFO nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Took 8.67 seconds to spawn the instance on the hypervisor.
Jan 05 21:33:41 compute-0 nova_compute[186018]: 2026-01-05 21:33:41.979 186022 DEBUG nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:42 compute-0 nova_compute[186018]: 2026-01-05 21:33:42.038 186022 INFO nova.compute.manager [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Took 9.15 seconds to build instance.
Jan 05 21:33:42 compute-0 nova_compute[186018]: 2026-01-05 21:33:42.050 186022 DEBUG oslo_concurrency.lockutils [None req-48118def-66e4-4fee-a41b-8f0e26d7f14d 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 9.255s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:42.872 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:42.873 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:42.875 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:42 compute-0 nova_compute[186018]: 2026-01-05 21:33:42.921 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.696 186022 DEBUG nova.compute.manager [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.696 186022 DEBUG oslo_concurrency.lockutils [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.696 186022 DEBUG oslo_concurrency.lockutils [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.696 186022 DEBUG oslo_concurrency.lockutils [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.696 186022 DEBUG nova.compute.manager [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] No waiting events found dispatching network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.697 186022 WARNING nova.compute.manager [req-1a4037bb-171a-40a0-a0bd-e686f4eb4e8e req-ddfc6655-fb67-4028-84fd-ab017ac950d9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received unexpected event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 for instance with vm_state active and task_state None.
Jan 05 21:33:43 compute-0 nova_compute[186018]: 2026-01-05 21:33:43.732 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:47 compute-0 nova_compute[186018]: 2026-01-05 21:33:47.923 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:48 compute-0 nova_compute[186018]: 2026-01-05 21:33:48.735 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:48 compute-0 podman[252922]: 2026-01-05 21:33:48.776986663 +0000 UTC m=+0.116823747 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, version=9.6, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.openshift.tags=minimal rhel9, name=ubi9-minimal, architecture=x86_64, release=1755695350)
Jan 05 21:33:48 compute-0 podman[252921]: 2026-01-05 21:33:48.820721064 +0000 UTC m=+0.155529396 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0)
Jan 05 21:33:52 compute-0 podman[252965]: 2026-01-05 21:33:52.761002503 +0000 UTC m=+0.093584065 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:33:52 compute-0 podman[252966]: 2026-01-05 21:33:52.783187647 +0000 UTC m=+0.107330337 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:33:52 compute-0 nova_compute[186018]: 2026-01-05 21:33:52.927 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.091 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.451 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.451 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" acquired by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.453 186022 INFO nova.compute.manager [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Rebooting instance
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.468 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.469 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquired lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:33:54 compute-0 nova_compute[186018]: 2026-01-05 21:33:54.470 186022 DEBUG nova.network.neutron [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:33:55 compute-0 nova_compute[186018]: 2026-01-05 21:33:55.714 186022 DEBUG nova.network.neutron [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:33:55 compute-0 nova_compute[186018]: 2026-01-05 21:33:55.734 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Releasing lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:33:55 compute-0 nova_compute[186018]: 2026-01-05 21:33:55.737 186022 DEBUG nova.compute.manager [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:56 compute-0 kernel: tapcecba75e-30 (unregistering): left promiscuous mode
Jan 05 21:33:56 compute-0 NetworkManager[56598]: <info>  [1767648836.1159] device (tapcecba75e-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.123 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 ovn_controller[98229]: 2026-01-05T21:33:56Z|00127|binding|INFO|Releasing lport cecba75e-30de-46e3-9539-c1911e784f2d from this chassis (sb_readonly=0)
Jan 05 21:33:56 compute-0 ovn_controller[98229]: 2026-01-05T21:33:56Z|00128|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d down in Southbound
Jan 05 21:33:56 compute-0 ovn_controller[98229]: 2026-01-05T21:33:56Z|00129|binding|INFO|Removing iface tapcecba75e-30 ovn-installed in OVS
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.128 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.139 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:93:1b 10.100.0.4'], port_security=['fa:16:3e:f6:93:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c4634a9-de38-4683-abb9-3964b285a21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'neutron:revision_number': '4', 'neutron:security_group_ids': '842e8104-5a29-4d14-99fa-0f1362c35beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dc7cb32-4733-47ef-890a-22095c3cd6b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=cecba75e-30de-46e3-9539-c1911e784f2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.141 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.142 107689 INFO neutron.agent.ovn.metadata.agent [-] Port cecba75e-30de-46e3-9539-c1911e784f2d in datapath 9d140934-6988-43f2-b45f-49e5cf3de4b0 unbound from our chassis
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.145 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d140934-6988-43f2-b45f-49e5cf3de4b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.146 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[32ca4b6f-461f-4218-88b2-927dac209a23]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.147 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 namespace which is not needed anymore
Jan 05 21:33:56 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 05 21:33:56 compute-0 systemd[1]: machine-qemu\x2d9\x2dinstance\x2d00000009.scope: Consumed 41.180s CPU time.
Jan 05 21:33:56 compute-0 systemd-machined[157312]: Machine qemu-9-instance-00000009 terminated.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.304 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.311 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.358 186022 INFO nova.virt.libvirt.driver [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance destroyed successfully.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.359 186022 DEBUG nova.objects.instance [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'resources' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.373 186022 DEBUG nova.virt.libvirt.vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:32:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:33:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], 
"gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.373 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.374 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.374 186022 DEBUG os_vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.376 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.377 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecba75e-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.379 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.380 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.382 186022 INFO os_vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30')
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [NOTICE]   (252223) : haproxy version is 2.8.14-c23fe91
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [NOTICE]   (252223) : path to executable is /usr/sbin/haproxy
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [WARNING]  (252223) : Exiting Master process...
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [WARNING]  (252223) : Exiting Master process...
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.389 186022 DEBUG nova.virt.libvirt.driver [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start _get_guest_xml network_info=[{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [ALERT]    (252223) : Current worker (252225) exited with code 143 (Terminated)
Jan 05 21:33:56 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[252219]: [WARNING]  (252223) : All workers exited. Exiting... (0)
Jan 05 21:33:56 compute-0 systemd[1]: libpod-28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0.scope: Deactivated successfully.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.396 186022 WARNING nova.virt.libvirt.driver [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:33:56 compute-0 podman[253027]: 2026-01-05 21:33:56.401354686 +0000 UTC m=+0.103692521 container died 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.404 186022 DEBUG nova.virt.libvirt.host [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.404 186022 DEBUG nova.virt.libvirt.host [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.412 186022 DEBUG nova.virt.libvirt.host [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.412 186022 DEBUG nova.virt.libvirt.host [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.413 186022 DEBUG nova.virt.libvirt.driver [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.413 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum=<?>,container_format='bare',created_at=<?>,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=1,min_ram=0,name=<?>,owner=<?>,properties=ImageMetaProps,protected=<?>,size=<?>,status=<?>,tags=<?>,updated_at=<?>,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.413 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.413 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.413 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.414 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.415 186022 DEBUG nova.virt.hardware [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.415 186022 DEBUG nova.objects.instance [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'vcpu_model' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.433 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0-userdata-shm.mount: Deactivated successfully.
Jan 05 21:33:56 compute-0 systemd[1]: var-lib-containers-storage-overlay-db9694e3caf6296d1c45c9af61c49d800e9f0e65beaa6405eeb0a11a15582ed9-merged.mount: Deactivated successfully.
Jan 05 21:33:56 compute-0 podman[253027]: 2026-01-05 21:33:56.453964021 +0000 UTC m=+0.156301856 container cleanup 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:33:56 compute-0 systemd[1]: libpod-conmon-28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0.scope: Deactivated successfully.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.502 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.503 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.503 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.504 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.505 186022 DEBUG nova.virt.libvirt.vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:32:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:33:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", 
"dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.506 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.510 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.513 186022 DEBUG nova.objects.instance [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'pci_devices' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.539 186022 DEBUG nova.virt.libvirt.driver [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <uuid>1c4634a9-de38-4683-abb9-3964b285a21c</uuid>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <name>instance-00000009</name>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:name>tempest-ServerActionsTestJSON-server-1019046137</nova:name>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:33:56</nova:creationTime>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:user uuid="7c73fe2d06da4c34ab29da3c61a0989e">tempest-ServerActionsTestJSON-578788577-project-member</nova:user>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:project uuid="5efd2bd3d0424bd99bd88ac5bfe7d457">tempest-ServerActionsTestJSON-578788577</nova:project>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         <nova:port uuid="cecba75e-30de-46e3-9539-c1911e784f2d">
Jan 05 21:33:56 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.4" ipVersion="4"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <system>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="serial">1c4634a9-de38-4683-abb9-3964b285a21c</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="uuid">1c4634a9-de38-4683-abb9-3964b285a21c</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </system>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <os>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </os>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <features>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </features>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk.config"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:f6:93:1b"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <target dev="tapcecba75e-30"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/console.log" append="off"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <video>
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </video>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <input type="keyboard" bus="usb"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:33:56 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:33:56 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:33:56 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:33:56 compute-0 nova_compute[186018]: </domain>
Jan 05 21:33:56 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:33:56 compute-0 podman[253069]: 2026-01-05 21:33:56.548142041 +0000 UTC m=+0.068206507 container remove 28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.548 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.567 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f0d641e2-f0bb-418c-a62a-0a831f14f410]: (4, ('Mon Jan  5 09:33:56 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 (28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0)\n28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0\nMon Jan  5 09:33:56 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 (28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0)\n28b7beb23a578b4341d2bdef0c63729a67ea7db3a684055ca8a87c9cca62fbd0\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.574 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f2226abc-6b22-4d84-9919-7c4c17040510]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.575 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d140934-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:56 compute-0 kernel: tap9d140934-60: left promiscuous mode
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.577 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.581 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.591 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[66dff090-48b1-4791-b667-c595049c508a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.613 186022 DEBUG nova.compute.manager [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.614 186022 DEBUG oslo_concurrency.lockutils [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.616 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[21c5b82f-9c50-41fe-b444-139705bb0f35]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.616 186022 DEBUG oslo_concurrency.lockutils [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.617 186022 DEBUG oslo_concurrency.lockutils [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.617 186022 DEBUG nova.compute.manager [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.618 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e7c9bbc2-e329-4f1f-9708-0e21a99cab7a]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.618 186022 WARNING nova.compute.manager [req-43cb2e15-2358-43cb-a51c-0e1d74831572 req-f8174378-eb3b-4f37-8897-ea17c6caaf1e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state active and task_state reboot_started_hard.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.619 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.621 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.621 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.644 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[45959952-494f-4c79-8e71-2590f8b58f2a]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 550522, 'reachable_time': 32533, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253088, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.647 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:33:56 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:56.648 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[3ca401ee-7c44-4b37-9ba2-a634a6f82bf4]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:56 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d140934\x2d6988\x2d43f2\x2db45f\x2d49e5cf3de4b0.mount: Deactivated successfully.
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.717 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.719 186022 DEBUG nova.objects.instance [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'trusted_certs' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.733 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.788 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.055s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.789 186022 DEBUG nova.virt.disk.api [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Checking if we can resize image /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.789 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.846 186022 DEBUG oslo_concurrency.processutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.847 186022 DEBUG nova.virt.disk.api [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Cannot resize image /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.847 186022 DEBUG nova.objects.instance [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'migration_context' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.862 186022 DEBUG nova.virt.libvirt.vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:32:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=<?>,power_state=1,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=<?>,task_state='reboot_started_hard',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:33:55Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": 
"10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.862 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.863 186022 DEBUG nova.network.os_vif_util [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.863 186022 DEBUG os_vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Plugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.864 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.864 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.864 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.867 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.867 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcecba75e-30, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.867 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapcecba75e-30, col_values=(('external_ids', {'iface-id': 'cecba75e-30de-46e3-9539-c1911e784f2d', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:f6:93:1b', 'vm-uuid': '1c4634a9-de38-4683-abb9-3964b285a21c'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.869 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 NetworkManager[56598]: <info>  [1767648836.8703] manager: (tapcecba75e-30): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/60)
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.873 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.878 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.879 186022 INFO os_vif [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Successfully plugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30')
Jan 05 21:33:56 compute-0 kernel: tapcecba75e-30: entered promiscuous mode
Jan 05 21:33:56 compute-0 systemd-udevd[253009]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:33:56 compute-0 NetworkManager[56598]: <info>  [1767648836.9738] manager: (tapcecba75e-30): new Tun device (/org/freedesktop/NetworkManager/Devices/61)
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.973 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 ovn_controller[98229]: 2026-01-05T21:33:56Z|00130|binding|INFO|Claiming lport cecba75e-30de-46e3-9539-c1911e784f2d for this chassis.
Jan 05 21:33:56 compute-0 ovn_controller[98229]: 2026-01-05T21:33:56Z|00131|binding|INFO|cecba75e-30de-46e3-9539-c1911e784f2d: Claiming fa:16:3e:f6:93:1b 10.100.0.4
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.979 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 NetworkManager[56598]: <info>  [1767648836.9884] device (tapcecba75e-30): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:33:56 compute-0 nova_compute[186018]: 2026-01-05 21:33:56.988 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:56 compute-0 NetworkManager[56598]: <info>  [1767648836.9931] device (tapcecba75e-30): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.001 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:93:1b 10.100.0.4'], port_security=['fa:16:3e:f6:93:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c4634a9-de38-4683-abb9-3964b285a21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'neutron:revision_number': '5', 'neutron:security_group_ids': '842e8104-5a29-4d14-99fa-0f1362c35beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dc7cb32-4733-47ef-890a-22095c3cd6b2, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=cecba75e-30de-46e3-9539-c1911e784f2d) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.004 107689 INFO neutron.agent.ovn.metadata.agent [-] Port cecba75e-30de-46e3-9539-c1911e784f2d in datapath 9d140934-6988-43f2-b45f-49e5cf3de4b0 bound to our chassis
Jan 05 21:33:57 compute-0 ovn_controller[98229]: 2026-01-05T21:33:57Z|00132|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d ovn-installed in OVS
Jan 05 21:33:57 compute-0 ovn_controller[98229]: 2026-01-05T21:33:57Z|00133|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d up in Southbound
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.010 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.012 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.025 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[700847ab-21b3-4174-8502-ca519eea87da]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.026 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap9d140934-61 in ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:33:57 compute-0 systemd-machined[157312]: New machine qemu-12-instance-00000009.
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.029 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap9d140934-60 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.029 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3fdac1f6-6aab-4203-bf36-80e979712585]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.030 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[94e12d88-56e9-4b7c-9753-b6e44fb16be4]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.041 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[8ad2d9b8-1990-4d39-a634-9872def04b8f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 systemd[1]: Started Virtual Machine qemu-12-instance-00000009.
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.067 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[8e6e7695-1727-423e-8c47-b953a705cd79]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.112 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[f54ee98e-4ca0-4177-9642-8301fb9b8a73]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 NetworkManager[56598]: <info>  [1767648837.1196] manager: (tap9d140934-60): new Veth device (/org/freedesktop/NetworkManager/Devices/62)
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.118 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a30303f7-88c7-4381-a94c-b3b6b59893ea]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.163 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[0f0754df-de77-432c-a5a9-c305a36d2278]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.167 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[ad8db0a0-435b-4f80-a3c6-58868e4ec6b8]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 NetworkManager[56598]: <info>  [1767648837.1941] device (tap9d140934-60): carrier: link connected
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.206 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[d86ad0c6-32d7-4afb-94a3-abbedebf726d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.244 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[66ee9ca1-1ac6-41da-8161-a5f7feb87e12]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d140934-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:28:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557789, 'reachable_time': 37158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253144, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.263 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ebe16278-09ec-4f9b-a52b-6f08977f94ce]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fed5:285f'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 557789, 'tstamp': 557789}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253145, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.285 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[513471ae-3b44-4a8d-a746-8ecaa17dd65a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap9d140934-61'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:d5:28:5f'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 36], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557789, 'reachable_time': 37158, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253146, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.327 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[97f01f15-47d8-4fc1-85a1-d0871744f59f]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.395 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[3e88c5a8-71be-4337-b1f1-9e76323a14ec]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.397 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d140934-60, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.397 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.398 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap9d140934-60, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.400 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 NetworkManager[56598]: <info>  [1767648837.4014] manager: (tap9d140934-60): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/63)
Jan 05 21:33:57 compute-0 kernel: tap9d140934-60: entered promiscuous mode
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.405 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.411 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap9d140934-60, col_values=(('external_ids', {'iface-id': '0fbb4d95-b8f2-4898-a3d0-8390d76218f2'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.413 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 ovn_controller[98229]: 2026-01-05T21:33:57Z|00134|binding|INFO|Releasing lport 0fbb4d95-b8f2-4898-a3d0-8390d76218f2 from this chassis (sb_readonly=0)
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.414 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.415 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.425 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.424 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[bc5aaf6a-4825-4d8b-82c3-fb26b271af24]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.428 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/9d140934-6988-43f2-b45f-49e5cf3de4b0.pid.haproxy
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID 9d140934-6988-43f2-b45f-49e5cf3de4b0
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:33:57 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:33:57.428 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'env', 'PROCESS_TAG=haproxy-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/9d140934-6988-43f2-b45f-49e5cf3de4b0.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.600 186022 DEBUG nova.virt.libvirt.host [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Removed pending event for 1c4634a9-de38-4683-abb9-3964b285a21c due to event _event_emit_delayed /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:438
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.600 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648837.5994806, 1c4634a9-de38-4683-abb9-3964b285a21c => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.600 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Resumed (Lifecycle Event)
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.612 186022 DEBUG nova.compute.manager [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance event wait completed in 0 seconds for  wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.623 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.625 186022 INFO nova.virt.libvirt.driver [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance rebooted successfully.
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.626 186022 DEBUG nova.compute.manager [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.633 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.667 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] During sync_power_state the instance has a pending task (reboot_started_hard). Skip.
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.668 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648837.6120977, 1c4634a9-de38-4683-abb9-3964b285a21c => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.668 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Started (Lifecycle Event)
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.696 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.704 186022 DEBUG oslo_concurrency.lockutils [None req-6aad7bd8-6122-4d90-804a-236a8704ea17 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" "released" by "nova.compute.manager.ComputeManager.reboot_instance.<locals>.do_reboot_instance" :: held 3.252s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.708 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Synchronizing instance power state after lifecycle event "Started"; current vm_state: active, current task_state: reboot_started_hard, current DB power_state: 1, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:33:57 compute-0 podman[253186]: 2026-01-05 21:33:57.850685295 +0000 UTC m=+0.065378113 container create c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:33:57 compute-0 systemd[1]: Started libpod-conmon-c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e.scope.
Jan 05 21:33:57 compute-0 podman[253186]: 2026-01-05 21:33:57.813570087 +0000 UTC m=+0.028262925 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:33:57 compute-0 nova_compute[186018]: 2026-01-05 21:33:57.928 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:33:57 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:33:57 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/6da627d4907201b9c9732670d27136423a88efeb256586395208d4398af68dd8/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:33:57 compute-0 podman[253186]: 2026-01-05 21:33:57.977067222 +0000 UTC m=+0.191760060 container init c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:33:57 compute-0 podman[253186]: 2026-01-05 21:33:57.984477787 +0000 UTC m=+0.199170605 container start c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:33:58 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [NOTICE]   (253203) : New worker (253205) forked
Jan 05 21:33:58 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [NOTICE]   (253203) : Loading success.
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.852 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.853 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.853 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.853 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.853 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.853 186022 WARNING nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state active and task_state None.
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.854 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.854 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.854 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.854 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.854 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 WARNING nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state active and task_state None.
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 DEBUG oslo_concurrency.lockutils [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 DEBUG nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:33:58 compute-0 nova_compute[186018]: 2026-01-05 21:33:58.855 186022 WARNING nova.compute.manager [req-35f56718-4c75-47c5-805d-39144e09e4e3 req-f9066daf-c266-4aa5-9f6c-7ec8307300b9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state active and task_state None.
Jan 05 21:33:59 compute-0 podman[202426]: time="2026-01-05T21:33:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:33:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:33:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32206 "" "Go-http-client/1.1"
Jan 05 21:33:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:33:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5758 "" "Go-http-client/1.1"
Jan 05 21:34:00 compute-0 ovn_controller[98229]: 2026-01-05T21:34:00Z|00016|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:9e:4e:dc 10.100.0.6
Jan 05 21:34:00 compute-0 ovn_controller[98229]: 2026-01-05T21:34:00Z|00017|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:9e:4e:dc 10.100.0.6
Jan 05 21:34:00 compute-0 podman[253223]: 2026-01-05 21:34:00.742273824 +0000 UTC m=+0.088122291 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:34:01 compute-0 openstack_network_exporter[205720]: ERROR   21:34:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:34:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:34:01 compute-0 openstack_network_exporter[205720]: ERROR   21:34:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:34:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:34:01 compute-0 anacron[229436]: Job `cron.daily' started
Jan 05 21:34:01 compute-0 nova_compute[186018]: 2026-01-05 21:34:01.870 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:01 compute-0 anacron[229436]: Job `cron.daily' terminated
Jan 05 21:34:02 compute-0 nova_compute[186018]: 2026-01-05 21:34:02.933 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:06 compute-0 nova_compute[186018]: 2026-01-05 21:34:06.875 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:07 compute-0 podman[253246]: 2026-01-05 21:34:07.005563354 +0000 UTC m=+0.091471600 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release=1214.1726694543, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, managed_by=edpm_ansible, name=ubi9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, build-date=2024-09-18T21:23:30, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., version=9.4, release-0.7.12=, maintainer=Red Hat, Inc., config_id=kepler, distribution-scope=public)
Jan 05 21:34:07 compute-0 podman[253247]: 2026-01-05 21:34:07.029882584 +0000 UTC m=+0.104898133 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:34:07 compute-0 nova_compute[186018]: 2026-01-05 21:34:07.941 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:10 compute-0 podman[253286]: 2026-01-05 21:34:10.732694132 +0000 UTC m=+0.085027690 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:34:10 compute-0 ovn_controller[98229]: 2026-01-05T21:34:10Z|00135|memory_trim|INFO|Detected inactivity (last active 30006 ms ago): trimming memory
Jan 05 21:34:11 compute-0 nova_compute[186018]: 2026-01-05 21:34:11.880 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:12 compute-0 nova_compute[186018]: 2026-01-05 21:34:12.945 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:15 compute-0 ovn_controller[98229]: 2026-01-05T21:34:15Z|00018|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:f6:00:12 10.100.0.203
Jan 05 21:34:15 compute-0 ovn_controller[98229]: 2026-01-05T21:34:15Z|00019|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:00:12 10.100.0.203
Jan 05 21:34:16 compute-0 nova_compute[186018]: 2026-01-05 21:34:16.886 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:17 compute-0 nova_compute[186018]: 2026-01-05 21:34:17.948 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:19 compute-0 podman[253320]: 2026-01-05 21:34:19.828087355 +0000 UTC m=+0.156933653 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, container_name=openstack_network_exporter, config_id=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, vcs-type=git, architecture=x86_64, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public)
Jan 05 21:34:19 compute-0 podman[253319]: 2026-01-05 21:34:19.849220061 +0000 UTC m=+0.184183740 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, 
config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 05 21:34:21 compute-0 nova_compute[186018]: 2026-01-05 21:34:21.890 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:22 compute-0 nova_compute[186018]: 2026-01-05 21:34:22.952 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:23 compute-0 podman[253364]: 2026-01-05 21:34:23.761985096 +0000 UTC m=+0.109432162 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2)
Jan 05 21:34:23 compute-0 podman[253365]: 2026-01-05 21:34:23.783738289 +0000 UTC m=+0.119429225 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:34:25 compute-0 nova_compute[186018]: 2026-01-05 21:34:25.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:25 compute-0 nova_compute[186018]: 2026-01-05 21:34:25.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:34:26 compute-0 ovn_controller[98229]: 2026-01-05T21:34:26Z|00020|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:f6:93:1b 10.100.0.4
Jan 05 21:34:26 compute-0 nova_compute[186018]: 2026-01-05 21:34:26.617 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:34:26 compute-0 nova_compute[186018]: 2026-01-05 21:34:26.617 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:34:26 compute-0 nova_compute[186018]: 2026-01-05 21:34:26.617 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:34:26 compute-0 nova_compute[186018]: 2026-01-05 21:34:26.893 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:27 compute-0 nova_compute[186018]: 2026-01-05 21:34:27.955 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.633 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [{"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.654 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-1c4634a9-de38-4683-abb9-3964b285a21c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.654 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.655 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.656 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.656 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.657 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.657 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.680 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.681 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.681 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.682 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:34:29 compute-0 podman[202426]: time="2026-01-05T21:34:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:34:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:34:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 32206 "" "Go-http-client/1.1"
Jan 05 21:34:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:34:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 5759 "" "Go-http-client/1.1"
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.803 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.893 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.909 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.970 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:29 compute-0 nova_compute[186018]: 2026-01-05 21:34:29.979 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.038 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.039 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.143 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3/disk --force-share --output=json" returned: 0 in 0.104s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.152 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.225 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.227 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.285 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.293 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.355 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.356 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.444 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.927 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.929 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4636MB free_disk=72.22877883911133GB free_vcpus=4 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.929 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:30 compute-0 nova_compute[186018]: 2026-01-05 21:34:30.930 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.093 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.094 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 1c4634a9-de38-4683-abb9-3964b285a21c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.094 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 8123e49e-6aaf-4e97-9f0e-4039061d12d3 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.094 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.095 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 4 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.095 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=1024MB phys_disk=79GB used_disk=4GB total_vcpus=8 used_vcpus=4 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.194 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.213 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.240 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.241 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.311s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:31 compute-0 openstack_network_exporter[205720]: ERROR   21:34:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:34:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:34:31 compute-0 openstack_network_exporter[205720]: ERROR   21:34:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:34:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:34:31 compute-0 podman[253438]: 2026-01-05 21:34:31.714427218 +0000 UTC m=+0.071042281 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:34:31 compute-0 nova_compute[186018]: 2026-01-05 21:34:31.895 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:32 compute-0 nova_compute[186018]: 2026-01-05 21:34:32.957 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.047 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.047 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.073 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.866 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.867 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.868 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.868 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.869 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.871 186022 INFO nova.compute.manager [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Terminating instance
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.873 186022 DEBUG nova.compute.manager [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:34:33 compute-0 kernel: tapcecba75e-30 (unregistering): left promiscuous mode
Jan 05 21:34:33 compute-0 NetworkManager[56598]: <info>  [1767648873.9158] device (tapcecba75e-30): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:34:33 compute-0 ovn_controller[98229]: 2026-01-05T21:34:33Z|00136|binding|INFO|Releasing lport cecba75e-30de-46e3-9539-c1911e784f2d from this chassis (sb_readonly=0)
Jan 05 21:34:33 compute-0 ovn_controller[98229]: 2026-01-05T21:34:33Z|00137|binding|INFO|Setting lport cecba75e-30de-46e3-9539-c1911e784f2d down in Southbound
Jan 05 21:34:33 compute-0 ovn_controller[98229]: 2026-01-05T21:34:33Z|00138|binding|INFO|Removing iface tapcecba75e-30 ovn-installed in OVS
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.938 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.949 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:33 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:33.952 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:93:1b 10.100.0.4'], port_security=['fa:16:3e:f6:93:1b 10.100.0.4'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.4/28', 'neutron:device_id': '1c4634a9-de38-4683-abb9-3964b285a21c', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5efd2bd3d0424bd99bd88ac5bfe7d457', 'neutron:revision_number': '6', 'neutron:security_group_ids': '842e8104-5a29-4d14-99fa-0f1362c35beb', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:port_fip': '192.168.122.233', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=4dc7cb32-4733-47ef-890a-22095c3cd6b2, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=cecba75e-30de-46e3-9539-c1911e784f2d) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:34:33 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:33.954 107689 INFO neutron.agent.ovn.metadata.agent [-] Port cecba75e-30de-46e3-9539-c1911e784f2d in datapath 9d140934-6988-43f2-b45f-49e5cf3de4b0 unbound from our chassis
Jan 05 21:34:33 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:33.957 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 9d140934-6988-43f2-b45f-49e5cf3de4b0, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:34:33 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:33.961 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[748d91e4-9d18-47e8-aaa0-e21b15ec9b20]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:33 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:33.962 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 namespace which is not needed anymore
Jan 05 21:34:33 compute-0 nova_compute[186018]: 2026-01-05 21:34:33.965 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:33 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000009.scope: Deactivated successfully.
Jan 05 21:34:34 compute-0 systemd[1]: machine-qemu\x2d12\x2dinstance\x2d00000009.scope: Consumed 33.659s CPU time.
Jan 05 21:34:34 compute-0 systemd-machined[157312]: Machine qemu-12-instance-00000009 terminated.
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.140 186022 INFO nova.virt.libvirt.driver [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Instance destroyed successfully.
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.140 186022 DEBUG nova.objects.instance [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lazy-loading 'resources' on Instance uuid 1c4634a9-de38-4683-abb9-3964b285a21c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:34:34 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [NOTICE]   (253203) : haproxy version is 2.8.14-c23fe91
Jan 05 21:34:34 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [NOTICE]   (253203) : path to executable is /usr/sbin/haproxy
Jan 05 21:34:34 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [WARNING]  (253203) : Exiting Master process...
Jan 05 21:34:34 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [ALERT]    (253203) : Current worker (253205) exited with code 143 (Terminated)
Jan 05 21:34:34 compute-0 neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0[253199]: [WARNING]  (253203) : All workers exited. Exiting... (0)
Jan 05 21:34:34 compute-0 systemd[1]: libpod-c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e.scope: Deactivated successfully.
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.156 186022 DEBUG nova.virt.libvirt.vif [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:32:35Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerActionsTestJSON-server-1019046137',display_name='tempest-ServerActionsTestJSON-server-1019046137',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveractionstestjson-server-1019046137',id=9,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBCdsX/VW/otw2+baeo241R2QhmkVDaN24udXgw5ga/G5VloNjKs7iKGi9GFFfjKokOQxQ2hPiWL3KkIRK7GQwJhLRoUKXUhkfvs1aUx6Mef7xFXtmjR0ROHB22gCQ/YkTw==',key_name='tempest-keypair-962693419',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:32:46Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5efd2bd3d0424bd99bd88ac5bfe7d457',ramdisk_id='',reservation_id='r-y4vmxuzn',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerActionsTestJSON-578788577',owner_user_name='tempest-ServerActionsTestJSON-578788577-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:33:57Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='7c73fe2d06da4c34ab29da3c61a0989e',uuid=1c4634a9-de38-4683-abb9-3964b285a21c,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": 
{"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.156 186022 DEBUG nova.network.os_vif_util [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converting VIF {"id": "cecba75e-30de-46e3-9539-c1911e784f2d", "address": "fa:16:3e:f6:93:1b", "network": {"id": "9d140934-6988-43f2-b45f-49e5cf3de4b0", "bridge": "br-int", "label": "tempest-ServerActionsTestJSON-2029168979-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.4", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.233", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5efd2bd3d0424bd99bd88ac5bfe7d457", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapcecba75e-30", "ovs_interfaceid": "cecba75e-30de-46e3-9539-c1911e784f2d", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.157 186022 DEBUG nova.network.os_vif_util [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.157 186022 DEBUG os_vif [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.160 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.160 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcecba75e-30, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.162 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.165 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:34:34 compute-0 podman[253484]: 2026-01-05 21:34:34.167524483 +0000 UTC m=+0.071481533 container died c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, org.label-schema.vendor=CentOS)
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.167 186022 INFO os_vif [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:93:1b,bridge_name='br-int',has_traffic_filtering=True,id=cecba75e-30de-46e3-9539-c1911e784f2d,network=Network(9d140934-6988-43f2-b45f-49e5cf3de4b0),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapcecba75e-30')
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.168 186022 INFO nova.virt.libvirt.driver [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Deleting instance files /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c_del
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.169 186022 INFO nova.virt.libvirt.driver [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Deletion of /var/lib/nova/instances/1c4634a9-de38-4683-abb9-3964b285a21c_del complete
Jan 05 21:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e-userdata-shm.mount: Deactivated successfully.
Jan 05 21:34:34 compute-0 systemd[1]: var-lib-containers-storage-overlay-6da627d4907201b9c9732670d27136423a88efeb256586395208d4398af68dd8-merged.mount: Deactivated successfully.
Jan 05 21:34:34 compute-0 podman[253484]: 2026-01-05 21:34:34.218109935 +0000 UTC m=+0.122066985 container cleanup c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.231 186022 INFO nova.compute.manager [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Took 0.36 seconds to destroy the instance on the hypervisor.
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.231 186022 DEBUG oslo.service.loopingcall [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.231 186022 DEBUG nova.compute.manager [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.232 186022 DEBUG nova.network.neutron [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:34:34 compute-0 systemd[1]: libpod-conmon-c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e.scope: Deactivated successfully.
Jan 05 21:34:34 compute-0 podman[253528]: 2026-01-05 21:34:34.313184548 +0000 UTC m=+0.060585466 container remove c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.322 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[39caa4e5-d576-4ebc-a50b-f2b2947022b4]: (4, ('Mon Jan  5 09:34:34 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 (c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e)\nc720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e\nMon Jan  5 09:34:34 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 (c720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e)\nc720fd2c2037f748c4ca7597fa98e4c92e71667b039ca594de55619db3e5c73e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.324 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[b0642789-5379-4bd9-bc1d-08e72bf6c228]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.325 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap9d140934-60, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.327 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 kernel: tap9d140934-60: left promiscuous mode
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.342 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.344 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.348 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9b38740f-eb77-4c80-8eb5-4991878d57f4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.363 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9be9c93e-13f5-45d6-9f0b-b7adc9252145]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.365 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[05b32c4b-ebdb-4ed0-86d1-21e9f4f22d5c]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.381 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[78a90557-8612-4d26-a9a7-700f95c1dbd3]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 557780, 'reachable_time': 37678, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253543, 'error': None, 'target': 'ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 systemd[1]: run-netns-ovnmeta\x2d9d140934\x2d6988\x2d43f2\x2db45f\x2d49e5cf3de4b0.mount: Deactivated successfully.
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.387 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-9d140934-6988-43f2-b45f-49e5cf3de4b0 deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.387 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[6dc03233-d6e7-42f1-bedf-72dee3c5b7ef]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.691 186022 DEBUG nova.compute.manager [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.691 186022 DEBUG oslo_concurrency.lockutils [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.691 186022 DEBUG oslo_concurrency.lockutils [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.692 186022 DEBUG oslo_concurrency.lockutils [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.692 186022 DEBUG nova.compute.manager [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.692 186022 DEBUG nova.compute.manager [req-b522f24d-4c7e-4fde-9da1-44ff93b1c383 req-764b17c2-0067-4b1e-90a7-897bc2121927 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-unplugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.715 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=15, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=14) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:34:34 compute-0 nova_compute[186018]: 2026-01-05 21:34:34.715 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:34 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:34.716 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 10 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.121 186022 DEBUG nova.network.neutron [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.145 186022 INFO nova.compute.manager [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Took 0.91 seconds to deallocate network for instance.
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:35.177 108054 DEBUG eventlet.wsgi.server [-] (108054) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:35.178 108054 DEBUG neutron.agent.ovn.metadata.server [-] Request: GET /latest/meta-data/public-ipv4 HTTP/1.0
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: Accept: */*
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: Connection: close
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: Content-Type: text/plain
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: Host: 169.254.169.254
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: User-Agent: curl/7.84.0
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: X-Forwarded-For: 10.100.0.6
Jan 05 21:34:35 compute-0 ovn_metadata_agent[107684]: X-Ovn-Network-Id: aae0d8ab-f4c2-45a3-98ea-6057c14a083d __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.194 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.194 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.464 186022 DEBUG nova.compute.manager [req-07fac69c-6812-442e-ae41-751bd86196c7 req-f9d5c720-d057-479e-ae0b-7591e887370e 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-deleted-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.505 186022 DEBUG nova.compute.provider_tree [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.527 186022 DEBUG nova.scheduler.client.report [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.547 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.353s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.580 186022 INFO nova.scheduler.client.report [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Deleted allocations for instance 1c4634a9-de38-4683-abb9-3964b285a21c
Jan 05 21:34:35 compute-0 nova_compute[186018]: 2026-01-05 21:34:35.674 186022 DEBUG oslo_concurrency.lockutils [None req-a8441eb6-cf00-445a-a4b4-a3407ae72c7f 7c73fe2d06da4c34ab29da3c61a0989e 5efd2bd3d0424bd99bd88ac5bfe7d457 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.807s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:36.632 108054 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:36.632 108054 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "GET /latest/meta-data/public-ipv4 HTTP/1.1" status: 200  len: 151 time: 1.4541426
Jan 05 21:34:36 compute-0 haproxy-metadata-proxy-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252655]: 10.100.0.6:42758 [05/Jan/2026:21:34:35.176] listener listener/metadata 0/0/0/1457/1457 200 135 - - ---- 1/1/0/0/0 0/0 "GET /latest/meta-data/public-ipv4 HTTP/1.1"
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:36.757 108054 DEBUG eventlet.wsgi.server [-] (108054) accepted '' server /usr/lib/python3.9/site-packages/eventlet/wsgi.py:1004
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.760 186022 DEBUG nova.compute.manager [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.761 186022 DEBUG oslo_concurrency.lockutils [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.761 186022 DEBUG oslo_concurrency.lockutils [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.761 186022 DEBUG oslo_concurrency.lockutils [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "1c4634a9-de38-4683-abb9-3964b285a21c-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.761 186022 DEBUG nova.compute.manager [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] No waiting events found dispatching network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:34:36 compute-0 nova_compute[186018]: 2026-01-05 21:34:36.761 186022 WARNING nova.compute.manager [req-bb00eeeb-9cb9-4ae1-b9a0-2c676123a155 req-6aee44a7-b477-4ec1-8fcc-cd80e58dfbb2 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Received unexpected event network-vif-plugged-cecba75e-30de-46e3-9539-c1911e784f2d for instance with vm_state deleted and task_state None.
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:36.770 108054 DEBUG neutron.agent.ovn.metadata.server [-] Request: POST /openstack/2013-10-17/password HTTP/1.0
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: Accept: */*
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: Connection: close
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: Content-Length: 100
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: Content-Type: application/x-www-form-urlencoded
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: Host: 169.254.169.254
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: User-Agent: curl/7.84.0
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: X-Forwarded-For: 10.100.0.6
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: X-Ovn-Network-Id: aae0d8ab-f4c2-45a3-98ea-6057c14a083d
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:34:36 compute-0 ovn_metadata_agent[107684]: testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest __call__ /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:82
Jan 05 21:34:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:37.028 108054 DEBUG neutron.agent.ovn.metadata.server [-] <Response [200]> _proxy_request /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/server.py:161
Jan 05 21:34:37 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:37.028 108054 INFO eventlet.wsgi.server [-] 10.100.0.6,<local> "POST /openstack/2013-10-17/password HTTP/1.1" status: 200  len: 134 time: 0.2581651
Jan 05 21:34:37 compute-0 haproxy-metadata-proxy-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252655]: 10.100.0.6:42768 [05/Jan/2026:21:34:36.756] listener listener/metadata 0/0/0/272/272 200 118 - - ---- 1/1/0/0/0 0/0 "POST /openstack/2013-10-17/password HTTP/1.1"
Jan 05 21:34:37 compute-0 podman[253544]: 2026-01-05 21:34:37.736476657 +0000 UTC m=+0.083989233 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, release=1214.1726694543, vcs-type=git, distribution-scope=public, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., io.openshift.expose-services=, release-0.7.12=)
Jan 05 21:34:37 compute-0 podman[253545]: 2026-01-05 21:34:37.773173813 +0000 UTC m=+0.117835524 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:34:37 compute-0 nova_compute[186018]: 2026-01-05 21:34:37.960 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.162 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.217 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.218 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.218 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.219 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.220 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.222 186022 INFO nova.compute.manager [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Terminating instance
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.223 186022 DEBUG nova.compute.manager [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:34:39 compute-0 kernel: tap8a773115-5c (unregistering): left promiscuous mode
Jan 05 21:34:39 compute-0 NetworkManager[56598]: <info>  [1767648879.2543] device (tap8a773115-5c): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.263 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 ovn_controller[98229]: 2026-01-05T21:34:39Z|00139|binding|INFO|Releasing lport 8a773115-5cfe-4366-97f0-643e66599184 from this chassis (sb_readonly=0)
Jan 05 21:34:39 compute-0 ovn_controller[98229]: 2026-01-05T21:34:39Z|00140|binding|INFO|Setting lport 8a773115-5cfe-4366-97f0-643e66599184 down in Southbound
Jan 05 21:34:39 compute-0 ovn_controller[98229]: 2026-01-05T21:34:39Z|00141|binding|INFO|Removing iface tap8a773115-5c ovn-installed in OVS
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.267 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.272 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:9e:4e:dc 10.100.0.6'], port_security=['fa:16:3e:9e:4e:dc 10.100.0.6'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.6/28', 'neutron:device_id': '8123e49e-6aaf-4e97-9f0e-4039061d12d3', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': 'f530e5001be644ada25ea22d2fc918bb', 'neutron:revision_number': '4', 'neutron:security_group_ids': '095e9468-180c-4738-8a72-aee138b2c523 2c4e81e4-d89a-4021-a6bc-8babb492b41e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com', 'neutron:port_fip': '192.168.122.179'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=a181296e-c1b7-4d0e-85b2-ec2adaea4841, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=8a773115-5cfe-4366-97f0-643e66599184) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.273 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 8a773115-5cfe-4366-97f0-643e66599184 in datapath aae0d8ab-f4c2-45a3-98ea-6057c14a083d unbound from our chassis
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.275 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network aae0d8ab-f4c2-45a3-98ea-6057c14a083d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.277 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[08b75f34-3457-4628-8150-1b04eaec8aa0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.278 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d namespace which is not needed anymore
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.282 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Deactivated successfully.
Jan 05 21:34:39 compute-0 systemd[1]: machine-qemu\x2d10\x2dinstance\x2d0000000a.scope: Consumed 40.403s CPU time.
Jan 05 21:34:39 compute-0 systemd-machined[157312]: Machine qemu-10-instance-0000000a terminated.
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [NOTICE]   (252653) : haproxy version is 2.8.14-c23fe91
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [NOTICE]   (252653) : path to executable is /usr/sbin/haproxy
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [WARNING]  (252653) : Exiting Master process...
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [WARNING]  (252653) : Exiting Master process...
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [ALERT]    (252653) : Current worker (252655) exited with code 143 (Terminated)
Jan 05 21:34:39 compute-0 neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d[252649]: [WARNING]  (252653) : All workers exited. Exiting... (0)
Jan 05 21:34:39 compute-0 systemd[1]: libpod-bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717.scope: Deactivated successfully.
Jan 05 21:34:39 compute-0 conmon[252649]: conmon bf524ce80e70ecfc5ade <nwarn>: Failed to open cgroups file: /sys/fs/cgroup/machine.slice/libpod-bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717.scope/container/memory.events
Jan 05 21:34:39 compute-0 podman[253607]: 2026-01-05 21:34:39.463755702 +0000 UTC m=+0.067768615 container died bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.494 186022 INFO nova.virt.libvirt.driver [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Instance destroyed successfully.
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.496 186022 DEBUG nova.objects.instance [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lazy-loading 'resources' on Instance uuid 8123e49e-6aaf-4e97-9f0e-4039061d12d3 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717-userdata-shm.mount: Deactivated successfully.
Jan 05 21:34:39 compute-0 systemd[1]: var-lib-containers-storage-overlay-7f7674c786b5ca6255937aa4e61e1ab6f5d31a3d3f19460eaf013802073e540d-merged.mount: Deactivated successfully.
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.512 186022 DEBUG nova.virt.libvirt.vif [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:33:15Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-TestServerBasicOps-server-1976684734',display_name='tempest-TestServerBasicOps-server-1976684734',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-testserverbasicops-server-1976684734',id=10,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data='ecdsa-sha2-nistp384 
AAAAE2VjZHNhLXNoYTItbmlzdHAzODQAAAAIbmlzdHAzODQAAABhBHH2qHQTdCMojQaAboVuJHZOo3UWBhUhPK+SxvS8rEHWVcJB4wATMh3Lnn5L4KoBVF1RMoE6cX5F41gAxeArXKiTxZK88pNt76pU5XoY2zaRV8Be3zK8C5dt0ZeQ3UH4eg==',key_name='tempest-TestServerBasicOps-1738816453',keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:33:27Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={meta1='data1',meta2='data2',metaN='dataN'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='f530e5001be644ada25ea22d2fc918bb',ramdisk_id='',reservation_id='r-ljq0ps0a',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='reader,member',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-TestServerBasicOps-273363449',owner_user_name='tempest-TestServerBasicOps-273363449-project-member',password_0='testtesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttesttest',password_1='',password_2='',password_3=''},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:34:36Z,user_data='IyEvYmluL3NoCmVjaG8gIlByaW50aW5nIGNpcnJvcyB1c2VyIGF1dGhvcml6ZWQga2V5cyIKY2F0IH5jaXJyb3MvLnNzaC9hdXRob3JpemVkX2tleXMgfHwgdHJ1ZQo=',user_id='1b776719a870485db8e8ec3697bac537',uuid=8123e49e-6aaf-4e97-9f0e-4039061d12d3,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": 
"fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.513 186022 DEBUG nova.network.os_vif_util [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converting VIF {"id": "8a773115-5cfe-4366-97f0-643e66599184", "address": "fa:16:3e:9e:4e:dc", "network": {"id": "aae0d8ab-f4c2-45a3-98ea-6057c14a083d", "bridge": "br-int", "label": "tempest-TestServerBasicOps-1267765400-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.179", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "f530e5001be644ada25ea22d2fc918bb", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap8a773115-5c", "ovs_interfaceid": "8a773115-5cfe-4366-97f0-643e66599184", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.514 186022 DEBUG nova.network.os_vif_util [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.514 186022 DEBUG os_vif [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.516 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.517 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap8a773115-5c, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:34:39 compute-0 podman[253607]: 2026-01-05 21:34:39.518844022 +0000 UTC m=+0.122856915 container cleanup bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.520 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.523 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.526 186022 INFO os_vif [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:9e:4e:dc,bridge_name='br-int',has_traffic_filtering=True,id=8a773115-5cfe-4366-97f0-643e66599184,network=Network(aae0d8ab-f4c2-45a3-98ea-6057c14a083d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap8a773115-5c')
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.527 186022 INFO nova.virt.libvirt.driver [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Deleting instance files /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3_del
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.528 186022 INFO nova.virt.libvirt.driver [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Deletion of /var/lib/nova/instances/8123e49e-6aaf-4e97-9f0e-4039061d12d3_del complete
Jan 05 21:34:39 compute-0 systemd[1]: libpod-conmon-bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717.scope: Deactivated successfully.
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.579 186022 INFO nova.compute.manager [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Took 0.35 seconds to destroy the instance on the hypervisor.
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.580 186022 DEBUG oslo.service.loopingcall [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.580 186022 DEBUG nova.compute.manager [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.581 186022 DEBUG nova.network.neutron [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:34:39 compute-0 podman[253653]: 2026-01-05 21:34:39.614442499 +0000 UTC m=+0.061681445 container remove bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS)
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.623 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[45232b09-1b89-4fb5-a1f9-3ce5b432d654]: (4, ('Mon Jan  5 09:34:39 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d (bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717)\nbf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717\nMon Jan  5 09:34:39 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d (bf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717)\nbf524ce80e70ecfc5ade65ddabdfc1fa6f26a101faecdff95d71518e904e1717\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.625 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[efb3b1b4-97a9-46cc-b9df-4e87c60cf391]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.626 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapaae0d8ab-f0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.628 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.641 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 kernel: tapaae0d8ab-f0: left promiscuous mode
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.648 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.651 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f995c45b-8b10-426f-bee2-517acead50fc]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.672 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[689117a3-163c-4c1c-bb67-ed69c1859fcd]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.674 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[f0b7bab9-83cb-477e-a33a-216e9d4cac5e]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.691 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4805db03-17bf-426a-80ec-60a70f544778]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 554341, 'reachable_time': 15896, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253668, 'error': None, 'target': 'ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 systemd[1]: run-netns-ovnmeta\x2daae0d8ab\x2df4c2\x2d45a3\x2d98ea\x2d6057c14a083d.mount: Deactivated successfully.
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.697 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-aae0d8ab-f4c2-45a3-98ea-6057c14a083d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:34:39 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:39.697 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[27634aef-4292-4dc5-991a-afbc32a809d0]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.917 186022 DEBUG nova.compute.manager [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-unplugged-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.918 186022 DEBUG oslo_concurrency.lockutils [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.919 186022 DEBUG oslo_concurrency.lockutils [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.920 186022 DEBUG oslo_concurrency.lockutils [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.921 186022 DEBUG nova.compute.manager [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] No waiting events found dispatching network-vif-unplugged-8a773115-5cfe-4366-97f0-643e66599184 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:34:39 compute-0 nova_compute[186018]: 2026-01-05 21:34:39.922 186022 DEBUG nova.compute.manager [req-3adfebf0-78e3-4746-a651-c6f2110ac449 req-581f03a8-a382-485f-bdca-903e07b45fc9 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-unplugged-8a773115-5cfe-4366-97f0-643e66599184 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.591 186022 DEBUG nova.network.neutron [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.612 186022 INFO nova.compute.manager [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Took 1.03 seconds to deallocate network for instance.
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.673 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.674 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.803 186022 DEBUG nova.compute.provider_tree [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.821 186022 DEBUG nova.scheduler.client.report [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.846 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.897 186022 INFO nova.scheduler.client.report [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Deleted allocations for instance 8123e49e-6aaf-4e97-9f0e-4039061d12d3
Jan 05 21:34:40 compute-0 nova_compute[186018]: 2026-01-05 21:34:40.961 186022 DEBUG oslo_concurrency.lockutils [None req-e8ac024f-9e8a-40f6-bc7a-c4b19bf61b26 1b776719a870485db8e8ec3697bac537 f530e5001be644ada25ea22d2fc918bb - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.743s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:41 compute-0 ovn_controller[98229]: 2026-01-05T21:34:41Z|00142|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:34:41 compute-0 ovn_controller[98229]: 2026-01-05T21:34:41Z|00143|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:34:41 compute-0 nova_compute[186018]: 2026-01-05 21:34:41.189 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:41 compute-0 podman[253669]: 2026-01-05 21:34:41.719337077 +0000 UTC m=+0.072627993 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:41.999 186022 DEBUG nova.compute.manager [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.004 186022 DEBUG oslo_concurrency.lockutils [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.005 186022 DEBUG oslo_concurrency.lockutils [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.006 186022 DEBUG oslo_concurrency.lockutils [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "8123e49e-6aaf-4e97-9f0e-4039061d12d3-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.007 186022 DEBUG nova.compute.manager [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] No waiting events found dispatching network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.008 186022 WARNING nova.compute.manager [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received unexpected event network-vif-plugged-8a773115-5cfe-4366-97f0-643e66599184 for instance with vm_state deleted and task_state None.
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.009 186022 DEBUG nova.compute.manager [req-18d185df-ce0a-459b-a62f-34627899ed6c req-603705f7-4ae7-48da-8606-73c89880ca13 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Received event network-vif-deleted-8a773115-5cfe-4366-97f0-643e66599184 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:34:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:42.873 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:42.873 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:42.874 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:42 compute-0 nova_compute[186018]: 2026-01-05 21:34:42.964 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:44 compute-0 nova_compute[186018]: 2026-01-05 21:34:44.523 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:44 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:34:44.718 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '15'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:34:45 compute-0 ovn_controller[98229]: 2026-01-05T21:34:45Z|00144|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:34:45 compute-0 ovn_controller[98229]: 2026-01-05T21:34:45Z|00145|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:34:45 compute-0 nova_compute[186018]: 2026-01-05 21:34:45.805 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:47 compute-0 nova_compute[186018]: 2026-01-05 21:34:47.968 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:48 compute-0 nova_compute[186018]: 2026-01-05 21:34:48.202 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:49 compute-0 nova_compute[186018]: 2026-01-05 21:34:49.137 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648874.1359534, 1c4634a9-de38-4683-abb9-3964b285a21c => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:34:49 compute-0 nova_compute[186018]: 2026-01-05 21:34:49.139 186022 INFO nova.compute.manager [-] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] VM Stopped (Lifecycle Event)
Jan 05 21:34:49 compute-0 nova_compute[186018]: 2026-01-05 21:34:49.164 186022 DEBUG nova.compute.manager [None req-df52e18c-313b-4683-9889-f4448abbe904 - - - - - -] [instance: 1c4634a9-de38-4683-abb9-3964b285a21c] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:34:49 compute-0 nova_compute[186018]: 2026-01-05 21:34:49.527 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:50 compute-0 ovn_controller[98229]: 2026-01-05T21:34:50Z|00146|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:34:50 compute-0 ovn_controller[98229]: 2026-01-05T21:34:50Z|00147|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:34:50 compute-0 nova_compute[186018]: 2026-01-05 21:34:50.253 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:50 compute-0 podman[253691]: 2026-01-05 21:34:50.767451475 +0000 UTC m=+0.090710049 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, build-date=2025-08-20T13:12:41, distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 
'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git)
Jan 05 21:34:50 compute-0 podman[253690]: 2026-01-05 21:34:50.784215477 +0000 UTC m=+0.125654589 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_controller, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:34:52 compute-0 nova_compute[186018]: 2026-01-05 21:34:52.976 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:54 compute-0 nova_compute[186018]: 2026-01-05 21:34:54.399 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:54 compute-0 nova_compute[186018]: 2026-01-05 21:34:54.491 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648879.4899278, 8123e49e-6aaf-4e97-9f0e-4039061d12d3 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:34:54 compute-0 nova_compute[186018]: 2026-01-05 21:34:54.492 186022 INFO nova.compute.manager [-] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] VM Stopped (Lifecycle Event)
Jan 05 21:34:54 compute-0 nova_compute[186018]: 2026-01-05 21:34:54.513 186022 DEBUG nova.compute.manager [None req-4ff5566e-b44b-4840-9edc-84fa0fb9cf4c - - - - - -] [instance: 8123e49e-6aaf-4e97-9f0e-4039061d12d3] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:34:54 compute-0 nova_compute[186018]: 2026-01-05 21:34:54.532 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:54 compute-0 podman[253739]: 2026-01-05 21:34:54.716599049 +0000 UTC m=+0.063815271 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:34:54 compute-0 podman[253740]: 2026-01-05 21:34:54.729533189 +0000 UTC m=+0.069110310 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.834 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.836 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.858 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.939 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.940 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.949 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.949 186022 INFO nova.compute.claims [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:34:57 compute-0 nova_compute[186018]: 2026-01-05 21:34:57.978 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.208 186022 DEBUG nova.compute.provider_tree [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.228 186022 DEBUG nova.scheduler.client.report [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.255 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.315s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.256 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.305 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.307 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.395 186022 INFO nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.536 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.622 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.624 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.625 186022 INFO nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Creating image(s)
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.626 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.627 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.629 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.648 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.745 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.747 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.747 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.758 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.777 186022 DEBUG nova.policy [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '519a606c2c0e4a39af7e481bfbbd000f', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '5e82df0d09c6419691e0e609dd7250ec', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.820 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.822 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.877 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe,backing_fmt=raw /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk 1073741824" returned: 0 in 0.054s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.879 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "3af50d8a112e7e4ff38bfa89796d95124b9e14fe" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.132s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.880 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.939 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.941 186022 DEBUG nova.virt.disk.api [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Checking if we can resize image /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:34:58 compute-0 nova_compute[186018]: 2026-01-05 21:34:58.942 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.003 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.004 186022 DEBUG nova.virt.disk.api [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Cannot resize image /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.004 186022 DEBUG nova.objects.instance [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lazy-loading 'migration_context' on Instance uuid 74ea9feb-891e-457f-9b12-7cd606300eb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.023 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.024 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Ensure instance console log exists: /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.026 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.027 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.028 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.535 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:34:59 compute-0 nova_compute[186018]: 2026-01-05 21:34:59.722 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Successfully created port: d22fe4de-12eb-4fe6-9885-e160892739a4 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:34:59 compute-0 podman[202426]: time="2026-01-05T21:34:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:34:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:34:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:34:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:34:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4832 "" "Go-http-client/1.1"
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.506 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Successfully updated port: d22fe4de-12eb-4fe6-9885-e160892739a4 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.521 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.522 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquired lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.523 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.675 186022 DEBUG nova.compute.manager [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Received event network-changed-d22fe4de-12eb-4fe6-9885-e160892739a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.677 186022 DEBUG nova.compute.manager [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Refreshing instance network info cache due to event network-changed-d22fe4de-12eb-4fe6-9885-e160892739a4. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.678 186022 DEBUG oslo_concurrency.lockutils [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.702 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:35:00 compute-0 nova_compute[186018]: 2026-01-05 21:35:00.880 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:01 compute-0 openstack_network_exporter[205720]: ERROR   21:35:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:35:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:35:01 compute-0 openstack_network_exporter[205720]: ERROR   21:35:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:35:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.146 186022 DEBUG nova.network.neutron [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Updating instance_info_cache with network_info: [{"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.164 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Releasing lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.165 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Instance network_info: |[{"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.165 186022 DEBUG oslo_concurrency.lockutils [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.166 186022 DEBUG nova.network.neutron [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Refreshing network info cache for port d22fe4de-12eb-4fe6-9885-e160892739a4 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.168 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Start _get_guest_xml network_info=[{"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.177 186022 WARNING nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.184 186022 DEBUG nova.virt.libvirt.host [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.185 186022 DEBUG nova.virt.libvirt.host [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.195 186022 DEBUG nova.virt.libvirt.host [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.196 186022 DEBUG nova.virt.libvirt.host [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.197 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.197 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:29:29Z,direct_url=<?>,disk_format='qcow2',id=ebb2027f-05a6-465a-af75-b7da40a91332,min_disk=0,min_ram=0,name='cirros-0.6.2-x86_64-disk.img',owner='704814115a61471f9b45484171f67b5f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:29:30Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.198 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.199 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.199 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.200 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.200 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.201 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.201 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.201 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.202 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.202 186022 DEBUG nova.virt.hardware [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.206 186022 DEBUG nova.virt.libvirt.vif [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:34:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-595545070',display_name='tempest-ServerAddressesTestJSON-server-595545070',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-595545070',id=12,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5e82df0d09c6419691e0e609dd7250ec',ramdisk_id='',reservation_id='r-cd0q8sp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-944728323',owner_user_name='tempest-ServerAddressesTes
tJSON-944728323-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:34:58Z,user_data=None,user_id='519a606c2c0e4a39af7e481bfbbd000f',uuid=74ea9feb-891e-457f-9b12-7cd606300eb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.207 186022 DEBUG nova.network.os_vif_util [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converting VIF {"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.208 186022 DEBUG nova.network.os_vif_util [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.209 186022 DEBUG nova.objects.instance [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lazy-loading 'pci_devices' on Instance uuid 74ea9feb-891e-457f-9b12-7cd606300eb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.224 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <uuid>74ea9feb-891e-457f-9b12-7cd606300eb0</uuid>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <name>instance-0000000c</name>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:name>tempest-ServerAddressesTestJSON-server-595545070</nova:name>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:35:02</nova:creationTime>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:user uuid="519a606c2c0e4a39af7e481bfbbd000f">tempest-ServerAddressesTestJSON-944728323-project-member</nova:user>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:project uuid="5e82df0d09c6419691e0e609dd7250ec">tempest-ServerAddressesTestJSON-944728323</nova:project>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="ebb2027f-05a6-465a-af75-b7da40a91332"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         <nova:port uuid="d22fe4de-12eb-4fe6-9885-e160892739a4">
Jan 05 21:35:02 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.12" ipVersion="4"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <system>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="serial">74ea9feb-891e-457f-9b12-7cd606300eb0</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="uuid">74ea9feb-891e-457f-9b12-7cd606300eb0</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </system>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <os>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </os>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <features>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </features>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.config"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:52:08:85"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <target dev="tapd22fe4de-12"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/console.log" append="off"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <video>
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </video>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:35:02 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:35:02 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:35:02 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:35:02 compute-0 nova_compute[186018]: </domain>
Jan 05 21:35:02 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.233 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Preparing to wait for external event network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.234 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.235 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.235 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.236 186022 DEBUG nova.virt.libvirt.vif [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:34:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-595545070',display_name='tempest-ServerAddressesTestJSON-server-595545070',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-595545070',id=12,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='5e82df0d09c6419691e0e609dd7250ec',ramdisk_id='',reservation_id='r-cd0q8sp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_hw_rng_model='virtio',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-ServerAddressesTestJSON-944728323',owner_user_name='tempest-ServerAddressesTestJSON-944728323-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:34:58Z,user_data=None,user_id='519a606c2c0e4a39af7e481bfbbd000f',uuid=74ea9feb-891e-457f-9b12-7cd606300eb0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.237 186022 DEBUG nova.network.os_vif_util [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converting VIF {"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.237 186022 DEBUG nova.network.os_vif_util [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.238 186022 DEBUG os_vif [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.239 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.240 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.240 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.244 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.244 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapd22fe4de-12, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.245 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tapd22fe4de-12, col_values=(('external_ids', {'iface-id': 'd22fe4de-12eb-4fe6-9885-e160892739a4', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:52:08:85', 'vm-uuid': '74ea9feb-891e-457f-9b12-7cd606300eb0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:02 compute-0 NetworkManager[56598]: <info>  [1767648902.2483] manager: (tapd22fe4de-12): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/64)
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.257 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.258 186022 INFO os_vif [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12')
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.320 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.320 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.321 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] No VIF found with MAC fa:16:3e:52:08:85, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.322 186022 INFO nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Using config drive
Jan 05 21:35:02 compute-0 podman[253800]: 2026-01-05 21:35:02.363898076 +0000 UTC m=+0.067954490 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.657 186022 INFO nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Creating config drive at /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.config
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.663 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvf3w1ads execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.789 186022 DEBUG oslo_concurrency.processutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvf3w1ads" returned: 0 in 0.126s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:02 compute-0 kernel: tapd22fe4de-12: entered promiscuous mode
Jan 05 21:35:02 compute-0 NetworkManager[56598]: <info>  [1767648902.8572] manager: (tapd22fe4de-12): new Tun device (/org/freedesktop/NetworkManager/Devices/65)
Jan 05 21:35:02 compute-0 ovn_controller[98229]: 2026-01-05T21:35:02Z|00148|binding|INFO|Claiming lport d22fe4de-12eb-4fe6-9885-e160892739a4 for this chassis.
Jan 05 21:35:02 compute-0 ovn_controller[98229]: 2026-01-05T21:35:02Z|00149|binding|INFO|d22fe4de-12eb-4fe6-9885-e160892739a4: Claiming fa:16:3e:52:08:85 10.100.0.12
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.860 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.871 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:08:85 10.100.0.12'], port_security=['fa:16:3e:52:08:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '74ea9feb-891e-457f-9b12-7cd606300eb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89881152-7c99-468f-be06-08b9052e078d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e82df0d09c6419691e0e609dd7250ec', 'neutron:revision_number': '2', 'neutron:security_group_ids': '48baaba1-4fe0-4d4a-9a74-269f5f3eff54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4a15241-2197-4a15-9488-3ad83ccf88ed, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=d22fe4de-12eb-4fe6-9885-e160892739a4) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.873 107689 INFO neutron.agent.ovn.metadata.agent [-] Port d22fe4de-12eb-4fe6-9885-e160892739a4 in datapath 89881152-7c99-468f-be06-08b9052e078d bound to our chassis
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.875 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network 89881152-7c99-468f-be06-08b9052e078d
Jan 05 21:35:02 compute-0 ovn_controller[98229]: 2026-01-05T21:35:02Z|00150|binding|INFO|Setting lport d22fe4de-12eb-4fe6-9885-e160892739a4 up in Southbound
Jan 05 21:35:02 compute-0 ovn_controller[98229]: 2026-01-05T21:35:02Z|00151|binding|INFO|Setting lport d22fe4de-12eb-4fe6-9885-e160892739a4 ovn-installed in OVS
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.885 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.889 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[34b978f0-ffc8-4f3c-a46e-ab22770d807f]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.890 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Creating VETH tap89881152-71 in ovnmeta-89881152-7c99-468f-be06-08b9052e078d namespace provision_datapath /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:665
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.892 240489 DEBUG neutron.privileged.agent.linux.ip_lib [-] Interface tap89881152-70 not found in namespace None get_link_id /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:204
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.892 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[fb9d2745-2d02-4e09-8466-36c427a9516e]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.893 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[8e1ff214-7e68-4019-a38b-87e67b007ecc]: (4, False) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:02 compute-0 systemd-machined[157312]: New machine qemu-13-instance-0000000c.
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.911 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[0d6ab037-94c3-4174-8a81-2796bd1ca206]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:02 compute-0 systemd[1]: Started Virtual Machine qemu-13-instance-0000000c.
Jan 05 21:35:02 compute-0 systemd-udevd[253844]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.944 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e2037545-e9de-4fa8-90a2-c295b5cc680e]: (4, ('net.ipv4.conf.all.promote_secondaries = 1\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:02 compute-0 NetworkManager[56598]: <info>  [1767648902.9503] device (tapd22fe4de-12): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:35:02 compute-0 NetworkManager[56598]: <info>  [1767648902.9508] device (tapd22fe4de-12): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:35:02 compute-0 nova_compute[186018]: 2026-01-05 21:35:02.980 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:02 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:02.989 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[b77753fc-1c10-4b4c-8b48-007f2a46b9c2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 NetworkManager[56598]: <info>  [1767648903.0014] manager: (tap89881152-70): new Veth device (/org/freedesktop/NetworkManager/Devices/66)
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.001 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d3d3062d-76d9-4b5d-9bfe-1c5eb1a6b0a6]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.036 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[631e172c-f3da-46de-8518-cd3f05848604]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.040 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[2f172d06-1e72-4e04-97e6-34fac5ada11e]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 NetworkManager[56598]: <info>  [1767648903.0652] device (tap89881152-70): carrier: link connected
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.072 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[d57e9fd6-643d-470d-b1ed-7172c6918fd9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.093 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[22335d31-9091-4ebd-bf29-c093e8bc3bd6]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap89881152-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:4a:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564376, 'reachable_time': 40816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 253874, 'error': None, 'target': 'ovnmeta-89881152-7c99-468f-be06-08b9052e078d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.120 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[821bb731-0162-48c5-8523-9575d46c26b7]: (4, ({'family': 10, 'prefixlen': 64, 'flags': 192, 'scope': 253, 'index': 2, 'attrs': [['IFA_ADDRESS', 'fe80::f816:3eff:fef5:4a9d'], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 564376, 'tstamp': 564376}], ['IFA_FLAGS', 192]], 'header': {'length': 72, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 253876, 'error': None, 'target': 'ovnmeta-89881152-7c99-468f-be06-08b9052e078d', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'},)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.141 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a3f4602f-2f69-4b94-8827-0c762dbd024a]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tap89881152-71'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:f5:4a:9d'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 90, 'tx_bytes': 90, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 40], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564376, 'reachable_time': 40816, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 1, 'inoctets': 76, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 1, 'outoctets': 76, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 1, 'outmcastpkts': 1, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 76, 'outmcastoctets': 76, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 1, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 1, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 0, 'sequence_number': 255, 'pid': 253882, 'error': None, 'target': 'ovnmeta-89881152-7c99-468f-be06-08b9052e078d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.172 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[6606c3dd-d0d4-484e-9232-67d1cb565fcc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.222 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648903.222366, 74ea9feb-891e-457f-9b12-7cd606300eb0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.224 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] VM Started (Lifecycle Event)
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.235 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[40790222-f252-498b-8ca4-e8548b4ff760]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.237 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89881152-70, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.237 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.238 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap89881152-70, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:03 compute-0 kernel: tap89881152-70: entered promiscuous mode
Jan 05 21:35:03 compute-0 NetworkManager[56598]: <info>  [1767648903.2411] manager: (tap89881152-70): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/67)
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.245 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.245 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tap89881152-70, col_values=(('external_ids', {'iface-id': 'e15a4cad-ae85-4535-a76d-0c6c736fb257'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:03 compute-0 ovn_controller[98229]: 2026-01-05T21:35:03Z|00152|binding|INFO|Releasing lport e15a4cad-ae85-4535-a76d-0c6c736fb257 from this chassis (sb_readonly=0)
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.249 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.249 107689 DEBUG neutron.agent.linux.utils [-] Unable to access /var/lib/neutron/external/pids/89881152-7c99-468f-be06-08b9052e078d.pid.haproxy; Error: [Errno 2] No such file or directory: '/var/lib/neutron/external/pids/89881152-7c99-468f-be06-08b9052e078d.pid.haproxy' get_value_from_file /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:252
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.250 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a77676ef-5fd5-4324-94db-08206ce3b98b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.252 107689 DEBUG neutron.agent.ovn.metadata.driver [-] haproxy_cfg = 
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: global
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     log         /dev/log local0 debug
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     log-tag     haproxy-metadata-proxy-89881152-7c99-468f-be06-08b9052e078d
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     user        root
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     group       root
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     maxconn     1024
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     pidfile     /var/lib/neutron/external/pids/89881152-7c99-468f-be06-08b9052e078d.pid.haproxy
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     daemon
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: defaults
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     log global
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     mode http
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     option httplog
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     option dontlognull
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     option http-server-close
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     option forwardfor
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     retries                 3
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     timeout http-request    30s
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     timeout connect         30s
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     timeout client          32s
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     timeout server          32s
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     timeout http-keep-alive 30s
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: listen listener
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     bind 169.254.169.254:80
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     server metadata /var/lib/neutron/metadata_proxy
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:     http-request add-header X-OVN-Network-ID 89881152-7c99-468f-be06-08b9052e078d
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]:  create_config_file /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/driver.py:107
Jan 05 21:35:03 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:03.252 107689 DEBUG neutron.agent.linux.utils [-] Running command: ['sudo', 'neutron-rootwrap', '/etc/neutron/rootwrap.conf', 'ip', 'netns', 'exec', 'ovnmeta-89881152-7c99-468f-be06-08b9052e078d', 'env', 'PROCESS_TAG=haproxy-89881152-7c99-468f-be06-08b9052e078d', 'haproxy', '-f', '/var/lib/neutron/ovn-metadata-proxy/89881152-7c99-468f-be06-08b9052e078d.conf'] create_process /usr/lib/python3.9/site-packages/neutron/agent/linux/utils.py:84
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.267 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.335 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.342 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648903.2224817, 74ea9feb-891e-457f-9b12-7cd606300eb0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.343 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] VM Paused (Lifecycle Event)
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.359 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.365 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.383 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:35:03 compute-0 podman[253914]: 2026-01-05 21:35:03.732289134 +0000 UTC m=+0.079691559 container create 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 21:35:03 compute-0 podman[253914]: 2026-01-05 21:35:03.691342756 +0000 UTC m=+0.038745221 image pull 3695f0466b4af47afdf4b467956f8cc4744d7249671a73e7ca3fd26cca2f59c3 quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified
Jan 05 21:35:03 compute-0 systemd[1]: Started libpod-conmon-4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac.scope.
Jan 05 21:35:03 compute-0 systemd[1]: Started libcrun container.
Jan 05 21:35:03 compute-0 kernel: xfs filesystem being remounted at /var/lib/containers/storage/overlay/11342d4777d80a838b1f637d381b43206f89207ddee44a072f6c35294c209eed/merged/var/lib/neutron supports timestamps until 2038 (0x7fffffff)
Jan 05 21:35:03 compute-0 podman[253914]: 2026-01-05 21:35:03.865141452 +0000 UTC m=+0.212543907 container init 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, org.label-schema.build-date=20251202, tcib_managed=true, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:35:03 compute-0 podman[253914]: 2026-01-05 21:35:03.874098168 +0000 UTC m=+0.221500603 container start 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, tcib_managed=true)
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.897 186022 DEBUG nova.compute.manager [req-793b09aa-b1d8-4d5e-a18e-107f73347df2 req-242bb398-abe3-462f-ab1f-c3441c6955e1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Received event network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.899 186022 DEBUG oslo_concurrency.lockutils [req-793b09aa-b1d8-4d5e-a18e-107f73347df2 req-242bb398-abe3-462f-ab1f-c3441c6955e1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.899 186022 DEBUG oslo_concurrency.lockutils [req-793b09aa-b1d8-4d5e-a18e-107f73347df2 req-242bb398-abe3-462f-ab1f-c3441c6955e1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.899 186022 DEBUG oslo_concurrency.lockutils [req-793b09aa-b1d8-4d5e-a18e-107f73347df2 req-242bb398-abe3-462f-ab1f-c3441c6955e1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.900 186022 DEBUG nova.compute.manager [req-793b09aa-b1d8-4d5e-a18e-107f73347df2 req-242bb398-abe3-462f-ab1f-c3441c6955e1 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Processing event network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.901 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:35:03 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [NOTICE]   (253933) : New worker (253935) forked
Jan 05 21:35:03 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [NOTICE]   (253933) : Loading success.
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.916 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767648903.9159288, 74ea9feb-891e-457f-9b12-7cd606300eb0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.917 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] VM Resumed (Lifecycle Event)
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.919 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.926 186022 INFO nova.virt.libvirt.driver [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Instance spawned successfully.
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.926 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.940 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.945 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.960 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.960 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.960 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.961 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.961 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.961 186022 DEBUG nova.virt.libvirt.driver [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:35:03 compute-0 nova_compute[186018]: 2026-01-05 21:35:03.974 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.025 186022 INFO nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Took 5.40 seconds to spawn the instance on the hypervisor.
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.025 186022 DEBUG nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.097 186022 INFO nova.compute.manager [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Took 6.19 seconds to build instance.
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.111 186022 DEBUG oslo_concurrency.lockutils [None req-9c878b7f-17cc-473d-8041-0d45cb9dff59 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.275s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.403 186022 DEBUG nova.network.neutron [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Updated VIF entry in instance network info cache for port d22fe4de-12eb-4fe6-9885-e160892739a4. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.404 186022 DEBUG nova.network.neutron [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Updating instance_info_cache with network_info: [{"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:35:04 compute-0 nova_compute[186018]: 2026-01-05 21:35:04.426 186022 DEBUG oslo_concurrency.lockutils [req-68cff96c-a042-4b4f-a7a5-bb4e2d8406ea req-7ce87a39-1de2-466f-9173-1893e34ba35a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-74ea9feb-891e-457f-9b12-7cd606300eb0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.984 186022 DEBUG nova.compute.manager [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Received event network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.984 186022 DEBUG oslo_concurrency.lockutils [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.985 186022 DEBUG oslo_concurrency.lockutils [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.985 186022 DEBUG oslo_concurrency.lockutils [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.985 186022 DEBUG nova.compute.manager [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] No waiting events found dispatching network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:35:05 compute-0 nova_compute[186018]: 2026-01-05 21:35:05.986 186022 WARNING nova.compute.manager [req-1666e1c6-e041-4e96-8b7d-7d3bf581830e req-91d17abf-fee9-4b49-9d40-17d42af3311a 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Received unexpected event network-vif-plugged-d22fe4de-12eb-4fe6-9885-e160892739a4 for instance with vm_state active and task_state None.
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.095 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.096 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.097 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.098 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.098 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.101 186022 INFO nova.compute.manager [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Terminating instance
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.103 186022 DEBUG nova.compute.manager [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:35:06 compute-0 kernel: tapd22fe4de-12 (unregistering): left promiscuous mode
Jan 05 21:35:06 compute-0 NetworkManager[56598]: <info>  [1767648906.1525] device (tapd22fe4de-12): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.154 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 ovn_controller[98229]: 2026-01-05T21:35:06Z|00153|binding|INFO|Releasing lport d22fe4de-12eb-4fe6-9885-e160892739a4 from this chassis (sb_readonly=0)
Jan 05 21:35:06 compute-0 ovn_controller[98229]: 2026-01-05T21:35:06Z|00154|binding|INFO|Setting lport d22fe4de-12eb-4fe6-9885-e160892739a4 down in Southbound
Jan 05 21:35:06 compute-0 ovn_controller[98229]: 2026-01-05T21:35:06Z|00155|binding|INFO|Removing iface tapd22fe4de-12 ovn-installed in OVS
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.157 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.163 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:52:08:85 10.100.0.12'], port_security=['fa:16:3e:52:08:85 10.100.0.12'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.12/28', 'neutron:device_id': '74ea9feb-891e-457f-9b12-7cd606300eb0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-89881152-7c99-468f-be06-08b9052e078d', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '5e82df0d09c6419691e0e609dd7250ec', 'neutron:revision_number': '4', 'neutron:security_group_ids': '48baaba1-4fe0-4d4a-9a74-269f5f3eff54', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=c4a15241-2197-4a15-9488-3ad83ccf88ed, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=d22fe4de-12eb-4fe6-9885-e160892739a4) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.166 107689 INFO neutron.agent.ovn.metadata.agent [-] Port d22fe4de-12eb-4fe6-9885-e160892739a4 in datapath 89881152-7c99-468f-be06-08b9052e078d unbound from our chassis
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.169 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network 89881152-7c99-468f-be06-08b9052e078d, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.171 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d0fc2e38-e566-4def-a0e5-488a49da7892]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.173 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-89881152-7c99-468f-be06-08b9052e078d namespace which is not needed anymore
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.179 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Deactivated successfully.
Jan 05 21:35:06 compute-0 systemd[1]: machine-qemu\x2d13\x2dinstance\x2d0000000c.scope: Consumed 2.825s CPU time.
Jan 05 21:35:06 compute-0 systemd-machined[157312]: Machine qemu-13-instance-0000000c terminated.
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.333 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.342 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [NOTICE]   (253933) : haproxy version is 2.8.14-c23fe91
Jan 05 21:35:06 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [NOTICE]   (253933) : path to executable is /usr/sbin/haproxy
Jan 05 21:35:06 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [WARNING]  (253933) : Exiting Master process...
Jan 05 21:35:06 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [ALERT]    (253933) : Current worker (253935) exited with code 143 (Terminated)
Jan 05 21:35:06 compute-0 neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d[253929]: [WARNING]  (253933) : All workers exited. Exiting... (0)
Jan 05 21:35:06 compute-0 systemd[1]: libpod-4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac.scope: Deactivated successfully.
Jan 05 21:35:06 compute-0 podman[253969]: 2026-01-05 21:35:06.373769608 +0000 UTC m=+0.073794813 container died 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.376 186022 INFO nova.virt.libvirt.driver [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Instance destroyed successfully.
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.377 186022 DEBUG nova.objects.instance [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lazy-loading 'resources' on Instance uuid 74ea9feb-891e-457f-9b12-7cd606300eb0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.390 186022 DEBUG nova.virt.libvirt.vif [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:34:56Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description='tempest-ServerAddressesTestJSON-server-595545070',display_name='tempest-ServerAddressesTestJSON-server-595545070',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='tempest-serveraddressestestjson-server-595545070',id=12,image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:35:04Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='5e82df0d09c6419691e0e609dd7250ec',ramdisk_id='',reservation_id='r-cd0q8sp2',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='ebb2027f-05a6-465a-af75-b7da40a91332',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_rng_model='virtio',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_
min_disk='1',image_min_ram='0',owner_project_name='tempest-ServerAddressesTestJSON-944728323',owner_user_name='tempest-ServerAddressesTestJSON-944728323-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:35:04Z,user_data=None,user_id='519a606c2c0e4a39af7e481bfbbd000f',uuid=74ea9feb-891e-457f-9b12-7cd606300eb0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.391 186022 DEBUG nova.network.os_vif_util [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converting VIF {"id": "d22fe4de-12eb-4fe6-9885-e160892739a4", "address": "fa:16:3e:52:08:85", "network": {"id": "89881152-7c99-468f-be06-08b9052e078d", "bridge": "br-int", "label": "tempest-ServerAddressesTestJSON-2002553524-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.12", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "5e82df0d09c6419691e0e609dd7250ec", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd22fe4de-12", "ovs_interfaceid": "d22fe4de-12eb-4fe6-9885-e160892739a4", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.392 186022 DEBUG nova.network.os_vif_util [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.392 186022 DEBUG os_vif [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.394 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.395 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd22fe4de-12, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.399 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.402 186022 INFO os_vif [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:52:08:85,bridge_name='br-int',has_traffic_filtering=True,id=d22fe4de-12eb-4fe6-9885-e160892739a4,network=Network(89881152-7c99-468f-be06-08b9052e078d),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd22fe4de-12')
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.402 186022 INFO nova.virt.libvirt.driver [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Deleting instance files /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0_del
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.403 186022 INFO nova.virt.libvirt.driver [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Deletion of /var/lib/nova/instances/74ea9feb-891e-457f-9b12-7cd606300eb0_del complete
Jan 05 21:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac-userdata-shm.mount: Deactivated successfully.
Jan 05 21:35:06 compute-0 systemd[1]: var-lib-containers-storage-overlay-11342d4777d80a838b1f637d381b43206f89207ddee44a072f6c35294c209eed-merged.mount: Deactivated successfully.
Jan 05 21:35:06 compute-0 podman[253969]: 2026-01-05 21:35:06.427166654 +0000 UTC m=+0.127191859 container cleanup 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202)
Jan 05 21:35:06 compute-0 systemd[1]: libpod-conmon-4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac.scope: Deactivated successfully.
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.477 186022 INFO nova.compute.manager [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Took 0.37 seconds to destroy the instance on the hypervisor.
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.478 186022 DEBUG oslo.service.loopingcall [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.478 186022 DEBUG nova.compute.manager [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.478 186022 DEBUG nova.network.neutron [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:35:06 compute-0 podman[254013]: 2026-01-05 21:35:06.528335378 +0000 UTC m=+0.072689745 container remove 4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.542 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[2334c275-3b5e-4d9d-8ff1-ac349434a4f0]: (4, ('Mon Jan  5 09:35:06 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d (4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac)\n4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac\nMon Jan  5 09:35:06 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-89881152-7c99-468f-be06-08b9052e078d (4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac)\n4096b2f9b5e6c7210e982f808f697be7676a3ab92d7d456a6ffba4fa7be511ac\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.545 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[58f3a8b5-1787-44a6-98cd-6ec899d52d43]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.548 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap89881152-70, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.552 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 kernel: tap89881152-70: left promiscuous mode
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.571 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.573 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[34534d70-80ea-4315-83fe-061a5d16bdf6]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.592 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4fa08bc1-59bc-4e19-b99c-48976d2d43be]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.595 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[82e60d18-c7be-4cc2-a93c-359f23a34b84]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.623 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[d242b337-4a2e-4402-9881-9499c8268634]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 564367, 'reachable_time': 20861, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 254027, 'error': None, 'target': 'ovnmeta-89881152-7c99-468f-be06-08b9052e078d', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 systemd[1]: run-netns-ovnmeta\x2d89881152\x2d7c99\x2d468f\x2dbe06\x2d08b9052e078d.mount: Deactivated successfully.
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.639 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-89881152-7c99-468f-be06-08b9052e078d deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:35:06 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:06.639 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[1e63dce7-3a13-40cb-9a94-602a33c937f2]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.644 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:06 compute-0 nova_compute[186018]: 2026-01-05 21:35:06.976 186022 DEBUG nova.network.neutron [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.000 186022 INFO nova.compute.manager [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Took 0.52 seconds to deallocate network for instance.
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.044 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.045 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.047 186022 DEBUG nova.compute.manager [req-2af0978e-4312-4cd2-86e8-5564db5305ab req-b1cd99f5-9870-49ae-8889-1043b70c92b3 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Received event network-vif-deleted-d22fe4de-12eb-4fe6-9885-e160892739a4 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.146 186022 DEBUG nova.compute.provider_tree [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.171 186022 DEBUG nova.scheduler.client.report [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.192 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.147s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.222 186022 INFO nova.scheduler.client.report [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Deleted allocations for instance 74ea9feb-891e-457f-9b12-7cd606300eb0
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.293 186022 DEBUG oslo_concurrency.lockutils [None req-ed250799-4d1d-426f-aa26-34cbd46bbb57 519a606c2c0e4a39af7e481bfbbd000f 5e82df0d09c6419691e0e609dd7250ec - - default default] Lock "74ea9feb-891e-457f-9b12-7cd606300eb0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.197s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.789 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.790 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.806 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.808 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/fe15eddf-ceea-4584-95df-dc1ea54e3c25 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.809 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.812 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.813 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.814 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.814 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.815 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.815 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.816 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:07.817 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:35:07 compute-0 nova_compute[186018]: 2026-01-05 21:35:07.985 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.564 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 05 Jan 2026 21:35:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-50f55c23-8984-42ce-8564-67fd34d71694 x-openstack-request-id: req-50f55c23-8984-42ce-8564-67fd34d71694 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.565 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "fe15eddf-ceea-4584-95df-dc1ea54e3c25", "name": "te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy", "status": "ACTIVE", "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "user_id": "4adc8921daaf44d4b88d43bd5764da44", "metadata": {"metering.server_group": "592ac083-4e5e-4ede-94dc-941b228764d4"}, "hostId": "3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8", "image": {"id": "be6cfe06-61ed-4c76-8e1d-bc9df6929005", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/be6cfe06-61ed-4c76-8e1d-bc9df6929005"}]}, "flavor": {"id": "ce1138a2-4b82-4664-8860-711a956c0882", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ce1138a2-4b82-4664-8860-711a956c0882"}]}, "created": "2026-01-05T21:33:32Z", "updated": "2026-01-05T21:33:42Z", "addresses": {"": [{"version": 4, "addr": "10.100.0.203", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:f6:00:12"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/fe15eddf-ceea-4584-95df-dc1ea54e3c25"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/fe15eddf-ceea-4584-95df-dc1ea54e3c25"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:33:41.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000b", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.565 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/fe15eddf-ceea-4584-95df-dc1ea54e3c25 used request id req-50f55c23-8984-42ce-8564-67fd34d71694 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.567 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.567 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.567 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.568 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.568 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.569 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.569 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.570 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.570 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.570 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.571 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:35:08.568162) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.571 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:35:08.571154) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.576 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.581 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for fe15eddf-ceea-4584-95df-dc1ea54e3c25 / tapd05ce4e7-0f inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.581 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.582 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.582 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.582 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.583 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.583 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.583 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.583 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:35:08.583454) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.583 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.584 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.585 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.585 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.585 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.586 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.586 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.586 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.586 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.588 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.588 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:35:08.586583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.590 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:35:08.589906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.589 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.590 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.591 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.591 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.591 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.591 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.592 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:35:08.592020) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.592 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.592 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.592 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.593 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.593 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.594 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.594 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.594 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.594 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.594 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.595 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.595 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:35:08.594654) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.596 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.596 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.596 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.597 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.597 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.597 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.598 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:35:08.597897) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.598 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.598 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy>]
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.599 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.599 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.599 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.599 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.600 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:35:08.599906) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.599 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.600 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.600 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.601 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.601 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.601 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.602 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.602 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.602 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:35:08.602345) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.602 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.602 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.603 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.604 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.604 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.604 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.604 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.604 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.605 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.606 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:35:08.605078) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.623 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.623 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.640 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.640 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.641 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.641 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.642 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.642 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.642 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.642 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.642 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.643 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.643 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.644 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.644 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.644 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:35:08.642472) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.644 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.645 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.645 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.645 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.645 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:35:08.645193) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.646 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy>]
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.646 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.646 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.646 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.647 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.647 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.647 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.647 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.648 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.648 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.649 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.649 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.649 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:35:08.647130) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.649 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.650 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.650 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:35:08.650092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.696 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.733 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 43.203125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.734 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.735 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.735 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.735 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:35:08.734738) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.736 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.737 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.737 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.737 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.738 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.738 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.738 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.738 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:35:08.736676) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.739 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.739 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.739 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:35:08.739360) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 podman[254029]: 2026-01-05 21:35:08.776425405 +0000 UTC m=+0.119539598 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, name=ubi9, architecture=x86_64, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543, distribution-scope=public, maintainer=Red Hat, Inc., vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9.)
Jan 05 21:35:08 compute-0 podman[254030]: 2026-01-05 21:35:08.781093278 +0000 UTC m=+0.115067050 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202)
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.795 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.796 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.838 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.839 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.841 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.841 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:35:08.841290) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.842 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.843 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.843 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.843 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.844 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.844 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.845 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 575714939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.845 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 64092754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.846 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.846 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.846 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.847 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:35:08.844092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.847 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.847 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.848 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.848 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.848 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.849 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:35:08.847615) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.850 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.851 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.851 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.852 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.852 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.852 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.853 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.853 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:35:08.850699) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.853 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.853 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.853 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.854 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.854 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:35:08.853761) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.854 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.854 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 72814592 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.855 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.856 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 36580000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.856 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:35:08.856024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.856 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 84940000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.856 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.857 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:35:08.857320) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3849764618 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:35:08.859023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.859 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.860 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.861 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.862 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:35:08.863 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:35:11 compute-0 nova_compute[186018]: 2026-01-05 21:35:11.400 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:12 compute-0 ovn_controller[98229]: 2026-01-05T21:35:12Z|00156|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:35:12 compute-0 ovn_controller[98229]: 2026-01-05T21:35:12Z|00157|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:35:12 compute-0 nova_compute[186018]: 2026-01-05 21:35:12.354 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:12 compute-0 nova_compute[186018]: 2026-01-05 21:35:12.381 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:12 compute-0 podman[254070]: 2026-01-05 21:35:12.76056267 +0000 UTC m=+0.095229039 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 05 21:35:12 compute-0 nova_compute[186018]: 2026-01-05 21:35:12.988 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:16 compute-0 nova_compute[186018]: 2026-01-05 21:35:16.403 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:16 compute-0 nova_compute[186018]: 2026-01-05 21:35:16.822 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:17 compute-0 nova_compute[186018]: 2026-01-05 21:35:17.991 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:21 compute-0 nova_compute[186018]: 2026-01-05 21:35:21.365 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767648906.3638911, 74ea9feb-891e-457f-9b12-7cd606300eb0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:35:21 compute-0 nova_compute[186018]: 2026-01-05 21:35:21.366 186022 INFO nova.compute.manager [-] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] VM Stopped (Lifecycle Event)
Jan 05 21:35:21 compute-0 nova_compute[186018]: 2026-01-05 21:35:21.386 186022 DEBUG nova.compute.manager [None req-fb6ac58c-e04e-46e8-ab62-fe4c2b51ff97 - - - - - -] [instance: 74ea9feb-891e-457f-9b12-7cd606300eb0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:35:21 compute-0 nova_compute[186018]: 2026-01-05 21:35:21.406 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:21 compute-0 podman[254100]: 2026-01-05 21:35:21.791111296 +0000 UTC m=+0.123639526 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., distribution-scope=public, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, release=1755695350, version=9.6, com.redhat.component=ubi9-minimal-container, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, url=https://catalog.redhat.com/en/search?searchType=containers, name=ubi9-minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, io.openshift.tags=minimal rhel9, vcs-type=git)
Jan 05 21:35:21 compute-0 podman[254099]: 2026-01-05 21:35:21.844862442 +0000 UTC m=+0.184897590 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_managed=true, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 21:35:22 compute-0 nova_compute[186018]: 2026-01-05 21:35:22.995 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:23 compute-0 ovn_controller[98229]: 2026-01-05T21:35:23Z|00158|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:35:23 compute-0 ovn_controller[98229]: 2026-01-05T21:35:23Z|00159|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:35:23 compute-0 nova_compute[186018]: 2026-01-05 21:35:23.234 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:25 compute-0 podman[254146]: 2026-01-05 21:35:25.777069848 +0000 UTC m=+0.108519158 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, managed_by=edpm_ansible, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202)
Jan 05 21:35:25 compute-0 podman[254147]: 2026-01-05 21:35:25.786970658 +0000 UTC m=+0.110634333 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:35:26 compute-0 nova_compute[186018]: 2026-01-05 21:35:26.410 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.673 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.674 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.674 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.675 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:35:27 compute-0 nova_compute[186018]: 2026-01-05 21:35:27.997 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:28 compute-0 ovn_controller[98229]: 2026-01-05T21:35:28Z|00160|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:35:28 compute-0 ovn_controller[98229]: 2026-01-05T21:35:28Z|00161|binding|INFO|Releasing lport 68b7e7cf-3a36-4106-85be-cc39d85ff653 from this chassis (sb_readonly=0)
Jan 05 21:35:28 compute-0 nova_compute[186018]: 2026-01-05 21:35:28.410 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.470 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.490 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.490 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.491 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:29 compute-0 nova_compute[186018]: 2026-01-05 21:35:29.491 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:35:29 compute-0 podman[202426]: time="2026-01-05T21:35:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:35:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:35:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:35:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:35:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4840 "" "Go-http-client/1.1"
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.414 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:31 compute-0 openstack_network_exporter[205720]: ERROR   21:35:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:35:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:35:31 compute-0 openstack_network_exporter[205720]: ERROR   21:35:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:35:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.497 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.498 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.598 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.684 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.086s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.686 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.761 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.770 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.862 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.091s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.863 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:31 compute-0 nova_compute[186018]: 2026-01-05 21:35:31.920 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.344 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.345 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4915MB free_disk=72.28668975830078GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.346 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.346 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.460 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.461 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.461 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.461 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.513 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.531 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.558 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.559 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.213s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:32 compute-0 podman[254199]: 2026-01-05 21:35:32.755327431 +0000 UTC m=+0.095566626 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:35:32 compute-0 nova_compute[186018]: 2026-01-05 21:35:32.999 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:34 compute-0 nova_compute[186018]: 2026-01-05 21:35:34.558 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:35 compute-0 nova_compute[186018]: 2026-01-05 21:35:35.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:36 compute-0 nova_compute[186018]: 2026-01-05 21:35:36.417 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:36.579 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=16, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=15) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:35:36 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:36.581 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 2 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:35:36 compute-0 nova_compute[186018]: 2026-01-05 21:35:36.582 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:37 compute-0 nova_compute[186018]: 2026-01-05 21:35:37.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:38 compute-0 nova_compute[186018]: 2026-01-05 21:35:38.002 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:38 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:38.583 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '16'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:35:39 compute-0 podman[254222]: 2026-01-05 21:35:39.771912865 +0000 UTC m=+0.108512638 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, distribution-scope=public, vendor=Red Hat, Inc., container_name=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, version=9.4, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, com.redhat.component=ubi9-container, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1214.1726694543, architecture=x86_64, name=ubi9)
Jan 05 21:35:39 compute-0 podman[254223]: 2026-01-05 21:35:39.7895733 +0000 UTC m=+0.118189963 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:35:41 compute-0 nova_compute[186018]: 2026-01-05 21:35:41.422 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:42.874 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:42.874 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:35:42.875 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:43 compute-0 nova_compute[186018]: 2026-01-05 21:35:43.004 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:43 compute-0 podman[254258]: 2026-01-05 21:35:43.717583615 +0000 UTC m=+0.068813502 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:35:46 compute-0 nova_compute[186018]: 2026-01-05 21:35:46.425 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:48 compute-0 nova_compute[186018]: 2026-01-05 21:35:48.008 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:51 compute-0 nova_compute[186018]: 2026-01-05 21:35:51.429 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_image_cache_manager_pass run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.461 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.462 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.463 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.register_storage_use.<locals>.do_register_storage_use" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.463 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "storage-registry-lock" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.464 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "storage-registry-lock" acquired by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.464 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "storage-registry-lock" "released" by "nova.virt.storage_users.get_storage_users.<locals>.do_get_storage_users" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.498 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Adding ephemeral_1_0706d66 into backend ephemeral images _store_ephemeral_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:100
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.517 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Verify base images _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:314
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.517 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Image id be6cfe06-61ed-4c76-8e1d-bc9df6929005 yields fingerprint 6132ba58e89e5b8de27dca23fb9f4769d454fe9f _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.518 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] image be6cfe06-61ed-4c76-8e1d-bc9df6929005 at (/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f): checking
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.518 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] image be6cfe06-61ed-4c76-8e1d-bc9df6929005 at (/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.521 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Image id  yields fingerprint da39a3ee5e6b4b0d3255bfef95601890afd80709 _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.522 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Image id ebb2027f-05a6-465a-af75-b7da40a91332 yields fingerprint 3af50d8a112e7e4ff38bfa89796d95124b9e14fe _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:319
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.522 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] image ebb2027f-05a6-465a-af75-b7da40a91332 at (/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe): checking
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.523 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] image ebb2027f-05a6-465a-af75-b7da40a91332 at (/var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe): image is in use _mark_in_use /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:279
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.524 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] 62f57876-af2d-4771-bffd-c87b7755cc5c is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.525 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] 62f57876-af2d-4771-bffd-c87b7755cc5c has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.525 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.626 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.627 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c is backed by 3af50d8a112e7e4ff38bfa89796d95124b9e14fe _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.628 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] fe15eddf-ceea-4584-95df-dc1ea54e3c25 is a valid instance name _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:126
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.629 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] fe15eddf-ceea-4584-95df-dc1ea54e3c25 has a disk file _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:129
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.630 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.737 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.107s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.739 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 is backed by 6132ba58e89e5b8de27dca23fb9f4769d454fe9f _list_backing_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:141
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.740 186022 WARNING nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.741 186022 WARNING nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Unknown base file: /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.741 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Active base files: /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f /var/lib/nova/instances/_base/3af50d8a112e7e4ff38bfa89796d95124b9e14fe
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.742 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Removable base files: /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.743 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/d089b1afe312c5a0e92fab4cd45cbd6f2c5805ec
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.744 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/4b3cb6d77cb774829604f60b9397307587f6e640
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.745 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Verification complete _age_and_verify_cached_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:350
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.746 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Verify swap images _age_and_verify_swap_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:299
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.746 186022 DEBUG nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Verify ephemeral images _age_and_verify_ephemeral_images /usr/lib/python3.9/site-packages/nova/virt/libvirt/imagecache.py:284
Jan 05 21:35:52 compute-0 nova_compute[186018]: 2026-01-05 21:35:52.747 186022 INFO nova.virt.libvirt.imagecache [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Base, swap or ephemeral file too young to remove: /var/lib/nova/instances/_base/ephemeral_1_0706d66
Jan 05 21:35:52 compute-0 podman[254284]: 2026-01-05 21:35:52.770166102 +0000 UTC m=+0.100264531 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., container_name=openstack_network_exporter, io.buildah.version=1.33.7, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.openshift.expose-services=, release=1755695350, version=9.6, name=ubi9-minimal, vcs-type=git, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b)
Jan 05 21:35:52 compute-0 podman[254282]: 2026-01-05 21:35:52.820651871 +0000 UTC m=+0.160901227 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:35:53 compute-0 nova_compute[186018]: 2026-01-05 21:35:53.012 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:56 compute-0 nova_compute[186018]: 2026-01-05 21:35:56.433 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:56 compute-0 podman[254329]: 2026-01-05 21:35:56.733662323 +0000 UTC m=+0.079240047 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:35:56 compute-0 podman[254328]: 2026-01-05 21:35:56.759667958 +0000 UTC m=+0.106649109 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:35:58 compute-0 nova_compute[186018]: 2026-01-05 21:35:58.014 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:35:59 compute-0 podman[202426]: time="2026-01-05T21:35:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:35:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:35:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:35:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:35:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4829 "" "Go-http-client/1.1"
Jan 05 21:36:01 compute-0 openstack_network_exporter[205720]: ERROR   21:36:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:36:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:36:01 compute-0 openstack_network_exporter[205720]: ERROR   21:36:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:36:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:36:01 compute-0 nova_compute[186018]: 2026-01-05 21:36:01.436 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:03 compute-0 nova_compute[186018]: 2026-01-05 21:36:03.017 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:03 compute-0 podman[254369]: 2026-01-05 21:36:03.76066144 +0000 UTC m=+0.104208365 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:36:06 compute-0 nova_compute[186018]: 2026-01-05 21:36:06.441 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:08 compute-0 nova_compute[186018]: 2026-01-05 21:36:08.022 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:10 compute-0 podman[254393]: 2026-01-05 21:36:10.739651652 +0000 UTC m=+0.091354016 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, config_id=kepler, managed_by=edpm_ansible, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, name=ubi9, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, container_name=kepler, release-0.7.12=, maintainer=Red Hat, Inc., version=9.4)
Jan 05 21:36:10 compute-0 podman[254394]: 2026-01-05 21:36:10.769537029 +0000 UTC m=+0.112435461 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:36:11 compute-0 nova_compute[186018]: 2026-01-05 21:36:11.446 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:13 compute-0 nova_compute[186018]: 2026-01-05 21:36:13.026 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:14 compute-0 podman[254429]: 2026-01-05 21:36:14.774757649 +0000 UTC m=+0.114058984 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_managed=true)
Jan 05 21:36:16 compute-0 nova_compute[186018]: 2026-01-05 21:36:16.450 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:18 compute-0 nova_compute[186018]: 2026-01-05 21:36:18.031 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:21 compute-0 nova_compute[186018]: 2026-01-05 21:36:21.456 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:23 compute-0 nova_compute[186018]: 2026-01-05 21:36:23.035 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:23 compute-0 podman[254448]: 2026-01-05 21:36:23.807924813 +0000 UTC m=+0.138853497 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=openstack_network_exporter, architecture=x86_64, maintainer=Red Hat, Inc., name=ubi9-minimal, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, version=9.6, com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:36:23 compute-0 podman[254447]: 2026-01-05 21:36:23.816497719 +0000 UTC m=+0.160826346 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:36:26 compute-0 nova_compute[186018]: 2026-01-05 21:36:26.460 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:27 compute-0 nova_compute[186018]: 2026-01-05 21:36:27.747 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:27 compute-0 nova_compute[186018]: 2026-01-05 21:36:27.748 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:36:27 compute-0 podman[254493]: 2026-01-05 21:36:27.767124001 +0000 UTC m=+0.101657358 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:36:27 compute-0 podman[254494]: 2026-01-05 21:36:27.787688622 +0000 UTC m=+0.122140846 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:36:28 compute-0 nova_compute[186018]: 2026-01-05 21:36:28.039 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:28 compute-0 nova_compute[186018]: 2026-01-05 21:36:28.600 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:36:28 compute-0 nova_compute[186018]: 2026-01-05 21:36:28.601 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:36:28 compute-0 nova_compute[186018]: 2026-01-05 21:36:28.601 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:36:29 compute-0 podman[202426]: time="2026-01-05T21:36:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:36:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:36:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:36:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:36:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4840 "" "Go-http-client/1.1"
Jan 05 21:36:31 compute-0 openstack_network_exporter[205720]: ERROR   21:36:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:36:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:36:31 compute-0 openstack_network_exporter[205720]: ERROR   21:36:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:36:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.463 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.907 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.929 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.929 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.930 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.930 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.930 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.931 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.952 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.952 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.952 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:36:31 compute-0 nova_compute[186018]: 2026-01-05 21:36:31.953 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.025 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.121 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.123 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.186 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.201 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.262 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.264 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.326 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.715 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.716 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4925MB free_disk=72.28673553466797GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.716 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.717 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.797 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.798 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.798 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.798 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.869 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.886 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.887 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:36:32 compute-0 nova_compute[186018]: 2026-01-05 21:36:32.888 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:36:33 compute-0 nova_compute[186018]: 2026-01-05 21:36:33.042 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:34 compute-0 nova_compute[186018]: 2026-01-05 21:36:34.419 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:34 compute-0 nova_compute[186018]: 2026-01-05 21:36:34.419 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:34 compute-0 podman[254546]: 2026-01-05 21:36:34.790629576 +0000 UTC m=+0.138966160 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:36:35 compute-0 nova_compute[186018]: 2026-01-05 21:36:35.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:35 compute-0 nova_compute[186018]: 2026-01-05 21:36:35.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:35 compute-0 nova_compute[186018]: 2026-01-05 21:36:35.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:35 compute-0 nova_compute[186018]: 2026-01-05 21:36:35.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:36:36 compute-0 nova_compute[186018]: 2026-01-05 21:36:36.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:36 compute-0 nova_compute[186018]: 2026-01-05 21:36:36.465 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:37 compute-0 nova_compute[186018]: 2026-01-05 21:36:37.473 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:38 compute-0 nova_compute[186018]: 2026-01-05 21:36:38.045 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:38 compute-0 nova_compute[186018]: 2026-01-05 21:36:38.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.037 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.063 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid 62f57876-af2d-4771-bffd-c87b7755cc5c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.064 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid fe15eddf-ceea-4584-95df-dc1ea54e3c25 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.064 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.065 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.065 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.065 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.112 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.047s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:36:39 compute-0 nova_compute[186018]: 2026-01-05 21:36:39.113 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.048s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:36:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:40.213 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=17, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=16) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:36:40 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:40.215 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 5 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:36:40 compute-0 nova_compute[186018]: 2026-01-05 21:36:40.217 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:41 compute-0 nova_compute[186018]: 2026-01-05 21:36:41.469 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:41 compute-0 podman[254570]: 2026-01-05 21:36:41.766934218 +0000 UTC m=+0.120766491 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, io.openshift.tags=base rhel9, io.openshift.expose-services=, vcs-type=git, version=9.4, build-date=2024-09-18T21:23:30, io.k8s.display-name=Red Hat Universal Base Image 9, distribution-scope=public, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1214.1726694543, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_id=kepler, maintainer=Red Hat, Inc., com.redhat.component=ubi9-container, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']})
Jan 05 21:36:41 compute-0 podman[254571]: 2026-01-05 21:36:41.767444471 +0000 UTC m=+0.103906277 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS)
Jan 05 21:36:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:42.875 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:36:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:42.876 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:36:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:42.877 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:36:43 compute-0 nova_compute[186018]: 2026-01-05 21:36:43.047 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:43 compute-0 sshd-session[254607]: Connection closed by 172.105.102.10 port 51994
Jan 05 21:36:43 compute-0 nova_compute[186018]: 2026-01-05 21:36:43.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:36:43 compute-0 nova_compute[186018]: 2026-01-05 21:36:43.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:36:43 compute-0 nova_compute[186018]: 2026-01-05 21:36:43.491 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:36:45 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:36:45.218 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '17'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:36:45 compute-0 podman[254608]: 2026-01-05 21:36:45.794607898 +0000 UTC m=+0.127871997 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS)
Jan 05 21:36:46 compute-0 nova_compute[186018]: 2026-01-05 21:36:46.472 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:48 compute-0 nova_compute[186018]: 2026-01-05 21:36:48.050 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:51 compute-0 nova_compute[186018]: 2026-01-05 21:36:51.476 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:53 compute-0 nova_compute[186018]: 2026-01-05 21:36:53.052 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:54 compute-0 podman[254631]: 2026-01-05 21:36:54.752946904 +0000 UTC m=+0.093392850 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.expose-services=, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, container_name=openstack_network_exporter, vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., build-date=2025-08-20T13:12:41, release=1755695350, vendor=Red Hat, Inc.)
Jan 05 21:36:54 compute-0 podman[254630]: 2026-01-05 21:36:54.807279194 +0000 UTC m=+0.155228068 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:36:56 compute-0 nova_compute[186018]: 2026-01-05 21:36:56.479 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:58 compute-0 nova_compute[186018]: 2026-01-05 21:36:58.054 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:36:58 compute-0 podman[254673]: 2026-01-05 21:36:58.725930614 +0000 UTC m=+0.077498422 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:36:58 compute-0 podman[254674]: 2026-01-05 21:36:58.766080781 +0000 UTC m=+0.108020665 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:36:59 compute-0 podman[202426]: time="2026-01-05T21:36:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:36:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:36:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:36:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:36:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Jan 05 21:37:01 compute-0 openstack_network_exporter[205720]: ERROR   21:37:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:37:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:37:01 compute-0 openstack_network_exporter[205720]: ERROR   21:37:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:37:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:37:01 compute-0 nova_compute[186018]: 2026-01-05 21:37:01.482 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:03 compute-0 nova_compute[186018]: 2026-01-05 21:37:03.058 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:05 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:37:05 compute-0 podman[254716]: 2026-01-05 21:37:05.615135593 +0000 UTC m=+0.062057344 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:37:06 compute-0 ovn_controller[98229]: 2026-01-05T21:37:06Z|00162|memory_trim|INFO|Detected inactivity (last active 30011 ms ago): trimming memory
Jan 05 21:37:06 compute-0 nova_compute[186018]: 2026-01-05 21:37:06.486 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.789 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.790 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.790 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.790 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.797 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.800 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.800 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.800 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.801 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.801 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.801 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:37:07.801121) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.802 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.803 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:37:07.802695) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.807 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 9 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.811 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.813 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.813 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.813 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.813 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.813 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.814 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.814 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.814 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.814 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.814 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:37:07.811887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:37:07.813025) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:37:07.814338) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.815 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.816 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:37:07.815540) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:37:07.816880) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.817 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.818 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:37:07.818523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.819 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.820 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.820 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.821 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:37:07.819913) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.821 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:37:07.821508) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.840 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.840 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.858 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.858 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.859 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.860 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.860 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.860 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:37:07.859810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.861 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:37:07.861620) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.862 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.863 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.864 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:37:07.863063) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.884 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.906 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 43.10546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.907 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:37:07.907476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.908 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:37:07.908604) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:37:07.910012) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.951 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.952 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.988 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.989 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.989 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.990 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 1352 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.991 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.991 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.991 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.991 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:37:07.990439) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.991 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.992 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.992 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.992 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.992 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.992 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 575714939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 64092754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:37:07.992105) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:37:07.994064) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.994 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.995 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:37:07.995523) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.996 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.997 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.997 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.997 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.997 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.997 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:37:07.997015) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 38270000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.998 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 203860000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:37:07.998583) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:07.999 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3874481687 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.000 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.001 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.001 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.001 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.001 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.001 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:37:07.999638) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.002 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:37:08.000929) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.002 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.003 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.004 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.005 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:37:08.006 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:37:08 compute-0 nova_compute[186018]: 2026-01-05 21:37:08.061 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:11 compute-0 nova_compute[186018]: 2026-01-05 21:37:11.489 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:12 compute-0 podman[254743]: 2026-01-05 21:37:12.730527787 +0000 UTC m=+0.082885183 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS)
Jan 05 21:37:12 compute-0 podman[254742]: 2026-01-05 21:37:12.741119446 +0000 UTC m=+0.097628971 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, architecture=x86_64, name=ubi9, build-date=2024-09-18T21:23:30, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, distribution-scope=public, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, vendor=Red Hat, Inc., release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_id=kepler, io.openshift.expose-services=)
Jan 05 21:37:13 compute-0 nova_compute[186018]: 2026-01-05 21:37:13.066 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:16 compute-0 nova_compute[186018]: 2026-01-05 21:37:16.492 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:16 compute-0 podman[254780]: 2026-01-05 21:37:16.749346394 +0000 UTC m=+0.102862699 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible)
Jan 05 21:37:18 compute-0 nova_compute[186018]: 2026-01-05 21:37:18.068 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:19 compute-0 sshd-session[254801]: Connection closed by 172.105.102.10 port 48330
Jan 05 21:37:19 compute-0 sshd-session[254807]: Connection closed by 172.105.102.10 port 48402
Jan 05 21:37:19 compute-0 sshd-session[254806]: error: Protocol major versions differ: 2 vs. 1
Jan 05 21:37:19 compute-0 sshd-session[254806]: banner exchange: Connection from 172.105.102.10 port 48392: could not read protocol version
Jan 05 21:37:19 compute-0 sshd-session[254804]: error: Protocol major versions differ: 2 vs. 1
Jan 05 21:37:19 compute-0 sshd-session[254804]: banner exchange: Connection from 172.105.102.10 port 48378: could not read protocol version
Jan 05 21:37:19 compute-0 sshd-session[254803]: Unable to negotiate with 172.105.102.10 port 48388: no matching key exchange method found. Their offer: diffie-hellman-group1-sha1 [preauth]
Jan 05 21:37:20 compute-0 sshd-session[254811]: Unable to negotiate with 172.105.102.10 port 48410: no matching host key type found. Their offer: ssh-dss [preauth]
Jan 05 21:37:20 compute-0 sshd-session[254805]: Invalid user xprgu from 172.105.102.10 port 48362
Jan 05 21:37:20 compute-0 sshd-session[254805]: Connection closed by invalid user xprgu 172.105.102.10 port 48362 [preauth]
Jan 05 21:37:20 compute-0 sshd-session[254813]: Unable to negotiate with 172.105.102.10 port 48418: no matching host key type found. Their offer: ssh-rsa [preauth]
Jan 05 21:37:20 compute-0 sshd-session[254815]: Connection closed by 172.105.102.10 port 48422 [preauth]
Jan 05 21:37:21 compute-0 sshd-session[254817]: Unable to negotiate with 172.105.102.10 port 48434: no matching host key type found. Their offer: ecdsa-sha2-nistp384 [preauth]
Jan 05 21:37:21 compute-0 sshd-session[254819]: Unable to negotiate with 172.105.102.10 port 48448: no matching host key type found. Their offer: ecdsa-sha2-nistp521 [preauth]
Jan 05 21:37:21 compute-0 nova_compute[186018]: 2026-01-05 21:37:21.495 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:21 compute-0 sshd-session[254821]: Connection closed by 172.105.102.10 port 48464 [preauth]
Jan 05 21:37:21 compute-0 sshd-session[254802]: Connection closed by 172.105.102.10 port 48346 [preauth]
Jan 05 21:37:23 compute-0 nova_compute[186018]: 2026-01-05 21:37:23.071 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:25 compute-0 podman[254824]: 2026-01-05 21:37:25.751959445 +0000 UTC m=+0.097702383 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, build-date=2025-08-20T13:12:41, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, name=ubi9-minimal, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, url=https://catalog.redhat.com/en/search?searchType=containers, maintainer=Red Hat, Inc., vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter)
Jan 05 21:37:25 compute-0 podman[254823]: 2026-01-05 21:37:25.816936326 +0000 UTC m=+0.165748145 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:37:26 compute-0 nova_compute[186018]: 2026-01-05 21:37:26.500 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:27 compute-0 nova_compute[186018]: 2026-01-05 21:37:27.492 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:27 compute-0 nova_compute[186018]: 2026-01-05 21:37:27.494 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:37:28 compute-0 nova_compute[186018]: 2026-01-05 21:37:28.074 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:37:29 compute-0 podman[254869]: 2026-01-05 21:37:29.717186362 +0000 UTC m=+0.062238970 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:37:29 compute-0 podman[254868]: 2026-01-05 21:37:29.735149205 +0000 UTC m=+0.084817045 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:37:29 compute-0 podman[202426]: time="2026-01-05T21:37:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:37:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:37:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:37:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:37:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4835 "" "Go-http-client/1.1"
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.974 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.975 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.975 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:37:29 compute-0 nova_compute[186018]: 2026-01-05 21:37:29.975 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:37:31 compute-0 openstack_network_exporter[205720]: ERROR   21:37:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:37:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:37:31 compute-0 openstack_network_exporter[205720]: ERROR   21:37:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:37:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:37:31 compute-0 nova_compute[186018]: 2026-01-05 21:37:31.504 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.075 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.805 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.826 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.827 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.828 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.828 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.852 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.853 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.853 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.853 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.933 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.994 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:37:33 compute-0 nova_compute[186018]: 2026-01-05 21:37:33.995 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.054 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.063 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.123 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.126 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.189 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.556 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.558 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4900MB free_disk=72.28667068481445GB free_vcpus=6 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.559 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.559 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.738 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.739 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.740 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 2 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.741 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=768MB phys_disk=79GB used_disk=2GB total_vcpus=8 used_vcpus=2 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.807 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.889 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.890 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.907 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.925 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:37:34 compute-0 nova_compute[186018]: 2026-01-05 21:37:34.992 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:37:35 compute-0 nova_compute[186018]: 2026-01-05 21:37:35.006 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:37:35 compute-0 nova_compute[186018]: 2026-01-05 21:37:35.009 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:37:35 compute-0 nova_compute[186018]: 2026-01-05 21:37:35.010 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.450s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:37:36 compute-0 nova_compute[186018]: 2026-01-05 21:37:36.003 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:36 compute-0 nova_compute[186018]: 2026-01-05 21:37:36.005 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:36 compute-0 nova_compute[186018]: 2026-01-05 21:37:36.005 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:36 compute-0 nova_compute[186018]: 2026-01-05 21:37:36.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:36 compute-0 nova_compute[186018]: 2026-01-05 21:37:36.508 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:36 compute-0 podman[254919]: 2026-01-05 21:37:36.772290968 +0000 UTC m=+0.127625431 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:37:38 compute-0 nova_compute[186018]: 2026-01-05 21:37:38.080 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:40 compute-0 nova_compute[186018]: 2026-01-05 21:37:40.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:37:41 compute-0 nova_compute[186018]: 2026-01-05 21:37:41.512 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:37:42.876 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:37:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:37:42.876 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:37:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:37:42.877 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:37:43 compute-0 nova_compute[186018]: 2026-01-05 21:37:43.082 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:43 compute-0 podman[254943]: 2026-01-05 21:37:43.726838448 +0000 UTC m=+0.072648814 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, config_id=kepler, io.openshift.tags=base rhel9, container_name=kepler, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, name=ubi9, vcs-type=git, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., architecture=x86_64, io.buildah.version=1.29.0, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, release=1214.1726694543, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, maintainer=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible)
Jan 05 21:37:43 compute-0 podman[254944]: 2026-01-05 21:37:43.739543622 +0000 UTC m=+0.079373381 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 
9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:37:46 compute-0 nova_compute[186018]: 2026-01-05 21:37:46.516 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:47 compute-0 podman[254978]: 2026-01-05 21:37:47.717555415 +0000 UTC m=+0.074967935 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224)
Jan 05 21:37:48 compute-0 nova_compute[186018]: 2026-01-05 21:37:48.084 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:51 compute-0 nova_compute[186018]: 2026-01-05 21:37:51.520 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:53 compute-0 nova_compute[186018]: 2026-01-05 21:37:53.086 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:56 compute-0 nova_compute[186018]: 2026-01-05 21:37:56.525 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:56 compute-0 podman[254999]: 2026-01-05 21:37:56.743284924 +0000 UTC m=+0.090606656 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, vendor=Red Hat, Inc., container_name=openstack_network_exporter, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., version=9.6, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-type=git, io.openshift.expose-services=, config_id=openstack_network_exporter, io.openshift.tags=minimal rhel9)
Jan 05 21:37:56 compute-0 podman[254998]: 2026-01-05 21:37:56.781571572 +0000 UTC m=+0.129794148 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ovn_controller, io.buildah.version=1.41.3, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:37:58 compute-0 nova_compute[186018]: 2026-01-05 21:37:58.090 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:37:59 compute-0 podman[202426]: time="2026-01-05T21:37:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:37:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:37:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:37:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:37:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Jan 05 21:38:00 compute-0 podman[255044]: 2026-01-05 21:38:00.771961481 +0000 UTC m=+0.106518535 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:38:00 compute-0 podman[255043]: 2026-01-05 21:38:00.778032221 +0000 UTC m=+0.128892475 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, tcib_managed=true, config_id=ovn_metadata_agent)
Jan 05 21:38:01 compute-0 openstack_network_exporter[205720]: ERROR   21:38:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:38:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:38:01 compute-0 openstack_network_exporter[205720]: ERROR   21:38:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:38:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:38:01 compute-0 nova_compute[186018]: 2026-01-05 21:38:01.529 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:03 compute-0 nova_compute[186018]: 2026-01-05 21:38:03.092 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:06 compute-0 nova_compute[186018]: 2026-01-05 21:38:06.533 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:07 compute-0 podman[255081]: 2026-01-05 21:38:07.74929663 +0000 UTC m=+0.091545101 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:38:08 compute-0 nova_compute[186018]: 2026-01-05 21:38:08.095 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.098 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.099 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.114 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.197 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.198 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.206 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.207 186022 INFO nova.compute.claims [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.346 186022 DEBUG nova.compute.provider_tree [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.372 186022 DEBUG nova.scheduler.client.report [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.400 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.203s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.401 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.439 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.440 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.458 186022 INFO nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.474 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.551 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.552 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.553 186022 INFO nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Creating image(s)
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.554 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.554 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.555 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.569 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.626 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.627 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.628 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.641 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.678 186022 DEBUG nova.policy [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d77496083304392a3bddf3b3cc09d6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.702 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.703 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.764 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk 1073741824" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.766 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.766 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.839 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.073s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.841 186022 DEBUG nova.virt.disk.api [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Checking if we can resize image /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.841 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.916 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk --force-share --output=json" returned: 0 in 0.075s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.917 186022 DEBUG nova.virt.disk.api [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Cannot resize image /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.918 186022 DEBUG nova.objects.instance [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'migration_context' on Instance uuid 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.940 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.941 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Ensure instance console log exists: /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.942 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.943 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:10 compute-0 nova_compute[186018]: 2026-01-05 21:38:10.943 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.370 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.371 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" acquired by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.386 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Starting instance... _do_build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2402
Jan 05 21:38:11 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:11.422 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=18, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=17) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:38:11 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:11.423 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 6 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.430 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.485 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.485 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.495 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Require both a host and instance NUMA topology to fit instance on host. numa_fit_instance_to_host /usr/lib/python3.9/site-packages/nova/virt/hardware.py:2368
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.496 186022 INFO nova.compute.claims [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Claim successful on node compute-0.ctlplane.example.com
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.531 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Successfully created port: 64342629-0b04-40fb-a867-9404e7421cc7 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.536 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.633 186022 DEBUG nova.compute.provider_tree [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.653 186022 DEBUG nova.scheduler.client.report [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.680 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.instance_claim" :: held 0.195s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.682 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Start building networks asynchronously for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2799
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.734 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Allocating IP information in the background. _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1952
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.735 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] allocate_for_instance() allocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1156
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.760 186022 INFO nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Ignoring supplied device name: /dev/vda. Libvirt can't honour user-supplied dev names
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.783 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Start building block device mappings for instance. _build_resources /usr/lib/python3.9/site-packages/nova/compute/manager.py:2834
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.870 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Start spawning the instance on the hypervisor. _build_and_run_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:2608
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.872 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Creating instance directory _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4723
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.873 186022 INFO nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Creating image(s)
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.874 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.info" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.875 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.info" acquired by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.876 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.info" "released" by "nova.virt.libvirt.imagebackend.Image.resolve_driver_format.<locals>.write_to_disk_info_file" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.891 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.952 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.954 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.955 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" acquired by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:11 compute-0 nova_compute[186018]: 2026-01-05 21:38:11.967 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.027 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.029 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk 1073741824 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.057 186022 DEBUG nova.policy [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Policy check for network:attach_external_network failed with credentials {'is_admin': False, 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'user_domain_id': 'default', 'system_scope': None, 'domain_id': None, 'project_id': '0d77496083304392a3bddf3b3cc09d6f', 'project_domain_id': 'default', 'roles': ['member', 'reader'], 'is_admin_project': True, 'service_user_id': None, 'service_user_domain_id': None, 'service_project_id': None, 'service_project_domain_id': None, 'service_roles': []} authorize /usr/lib/python3.9/site-packages/nova/policy.py:203
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.072 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "env LC_ALL=C LANG=C qemu-img create -f qcow2 -o backing_file=/var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f,backing_fmt=raw /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk 1073741824" returned: 0 in 0.043s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.075 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "6132ba58e89e5b8de27dca23fb9f4769d454fe9f" "released" by "nova.virt.libvirt.imagebackend.Qcow2.create_image.<locals>.create_qcow2_image" :: held 0.120s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.077 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.154 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/_base/6132ba58e89e5b8de27dca23fb9f4769d454fe9f --force-share --output=json" returned: 0 in 0.077s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.156 186022 DEBUG nova.virt.disk.api [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Checking if we can resize image /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk. size=1073741824 can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:166
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.160 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.221 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.223 186022 DEBUG nova.virt.disk.api [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Cannot resize image /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk to a smaller size. can_resize_image /usr/lib/python3.9/site-packages/nova/virt/disk/api.py:172
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.224 186022 DEBUG nova.objects.instance [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'migration_context' on Instance uuid 66b489b4-d427-4eb3-b712-aa91b1410874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.240 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Created local disks _create_image /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4857
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.240 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Ensure instance console log exists: /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/console.log _ensure_console_log_for_instance /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4609
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.242 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "vgpu_resources" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.242 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" acquired by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:12 compute-0 nova_compute[186018]: 2026-01-05 21:38:12.243 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "vgpu_resources" "released" by "nova.virt.libvirt.driver.LibvirtDriver._allocate_mdevs" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.096 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.175 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Successfully updated port: 64342629-0b04-40fb-a867-9404e7421cc7 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.191 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.191 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquired lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.192 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.326 186022 DEBUG nova.compute.manager [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-changed-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.327 186022 DEBUG nova.compute.manager [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Refreshing instance network info cache due to event network-changed-64342629-0b04-40fb-a867-9404e7421cc7. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.327 186022 DEBUG oslo_concurrency.lockutils [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.674 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:38:13 compute-0 nova_compute[186018]: 2026-01-05 21:38:13.908 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Successfully created port: 76d8404e-3237-44da-934d-3e7e8792c114 _create_port_minimal /usr/lib/python3.9/site-packages/nova/network/neutron.py:548
Jan 05 21:38:14 compute-0 podman[255136]: 2026-01-05 21:38:14.756142806 +0000 UTC m=+0.087234458 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, container_name=ceilometer_agent_ipmi, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:38:14 compute-0 podman[255135]: 2026-01-05 21:38:14.770577186 +0000 UTC m=+0.118876181 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, vendor=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, name=ubi9, managed_by=edpm_ansible, io.openshift.tags=base rhel9, io.openshift.expose-services=, architecture=x86_64, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, maintainer=Red Hat, Inc., version=9.4, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.855 186022 DEBUG nova.network.neutron [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Updating instance_info_cache with network_info: [{"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.874 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Releasing lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.874 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Instance network_info: |[{"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.875 186022 DEBUG oslo_concurrency.lockutils [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.875 186022 DEBUG nova.network.neutron [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Refreshing network info cache for port 64342629-0b04-40fb-a867-9404e7421cc7 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.879 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Start _get_guest_xml network_info=[{"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.889 186022 WARNING nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.901 186022 DEBUG nova.virt.libvirt.host [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.902 186022 DEBUG nova.virt.libvirt.host [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.907 186022 DEBUG nova.virt.libvirt.host [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.908 186022 DEBUG nova.virt.libvirt.host [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.909 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.910 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.911 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.911 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.912 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.913 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.913 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.914 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.915 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.916 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.916 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.917 186022 DEBUG nova.virt.hardware [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.922 186022 DEBUG nova.virt.libvirt.vif [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',id=13,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-b6z8lowl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-10918
53177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:38:10Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=4bc1b97d-0c3d-4616-af67-f8b9ffc067f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.923 186022 DEBUG nova.network.os_vif_util [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.925 186022 DEBUG nova.network.os_vif_util [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.926 186022 DEBUG nova.objects.instance [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'pci_devices' on Instance uuid 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.941 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <uuid>4bc1b97d-0c3d-4616-af67-f8b9ffc067f0</uuid>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <name>instance-0000000d</name>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:name>te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni</nova:name>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:38:14</nova:creationTime>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:user uuid="4adc8921daaf44d4b88d43bd5764da44">tempest-PrometheusGabbiTest-1091853177-project-member</nova:user>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:project uuid="0d77496083304392a3bddf3b3cc09d6f">tempest-PrometheusGabbiTest-1091853177</nova:project>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="be6cfe06-61ed-4c76-8e1d-bc9df6929005"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         <nova:port uuid="64342629-0b04-40fb-a867-9404e7421cc7">
Jan 05 21:38:14 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.0.25" ipVersion="4"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <system>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="serial">4bc1b97d-0c3d-4616-af67-f8b9ffc067f0</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="uuid">4bc1b97d-0c3d-4616-af67-f8b9ffc067f0</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </system>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <os>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </os>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <features>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </features>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.config"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:0c:5e:3e"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <target dev="tap64342629-0b"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/console.log" append="off"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <video>
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </video>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:38:14 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:38:14 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:38:14 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:38:14 compute-0 nova_compute[186018]: </domain>
Jan 05 21:38:14 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.954 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Preparing to wait for external event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.955 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.955 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.955 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.956 186022 DEBUG nova.virt.libvirt.vif [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',id=13,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-b6z8lowl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:38:10Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=4bc1b97d-0c3d-4616-af67-f8b9ffc067f0,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.956 186022 DEBUG nova.network.os_vif_util [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.957 186022 DEBUG nova.network.os_vif_util [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.957 186022 DEBUG os_vif [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.960 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.961 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.962 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.966 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.967 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap64342629-0b, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.967 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap64342629-0b, col_values=(('external_ids', {'iface-id': '64342629-0b04-40fb-a867-9404e7421cc7', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:0c:5e:3e', 'vm-uuid': '4bc1b97d-0c3d-4616-af67-f8b9ffc067f0'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.969 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.971 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:38:14 compute-0 NetworkManager[56598]: <info>  [1767649094.9730] manager: (tap64342629-0b): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/68)
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.979 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.981 186022 INFO os_vif [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b')
Jan 05 21:38:14 compute-0 nova_compute[186018]: 2026-01-05 21:38:14.998 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Successfully updated port: 76d8404e-3237-44da-934d-3e7e8792c114 _update_port /usr/lib/python3.9/site-packages/nova/network/neutron.py:586
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.038 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.039 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquired lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.039 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Building network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2010
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.072 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.073 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.073 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No VIF found with MAC fa:16:3e:0c:5e:3e, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.074 186022 INFO nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Using config drive
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.292 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Instance cache missing network info. _get_preexisting_port_ids /usr/lib/python3.9/site-packages/nova/network/neutron.py:3323
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.778 186022 INFO nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Creating config drive at /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.config
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.785 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hi4gw0 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.903 186022 DEBUG nova.compute.manager [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-changed-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.904 186022 DEBUG nova.compute.manager [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Refreshing instance network info cache due to event network-changed-76d8404e-3237-44da-934d-3e7e8792c114. external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11053
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.904 186022 DEBUG oslo_concurrency.lockutils [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.912 186022 DEBUG oslo_concurrency.processutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmp18hi4gw0" returned: 0 in 0.128s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:15 compute-0 kernel: tap64342629-0b: entered promiscuous mode
Jan 05 21:38:15 compute-0 NetworkManager[56598]: <info>  [1767649095.9886] manager: (tap64342629-0b): new Tun device (/org/freedesktop/NetworkManager/Devices/69)
Jan 05 21:38:15 compute-0 nova_compute[186018]: 2026-01-05 21:38:15.989 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:15 compute-0 ovn_controller[98229]: 2026-01-05T21:38:15Z|00163|binding|INFO|Claiming lport 64342629-0b04-40fb-a867-9404e7421cc7 for this chassis.
Jan 05 21:38:15 compute-0 ovn_controller[98229]: 2026-01-05T21:38:15Z|00164|binding|INFO|64342629-0b04-40fb-a867-9404e7421cc7: Claiming fa:16:3e:0c:5e:3e 10.100.0.25
Jan 05 21:38:15 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:15.998 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5e:3e 10.100.0.25'], port_security=['fa:16:3e:0c:5e:3e 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/16', 'neutron:device_id': '4bc1b97d-0c3d-4616-af67-f8b9ffc067f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=64342629-0b04-40fb-a867-9404e7421cc7) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.000 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 64342629-0b04-40fb-a867-9404e7421cc7 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab bound to our chassis
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.001 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:38:16 compute-0 ovn_controller[98229]: 2026-01-05T21:38:16Z|00165|binding|INFO|Setting lport 64342629-0b04-40fb-a867-9404e7421cc7 ovn-installed in OVS
Jan 05 21:38:16 compute-0 ovn_controller[98229]: 2026-01-05T21:38:16Z|00166|binding|INFO|Setting lport 64342629-0b04-40fb-a867-9404e7421cc7 up in Southbound
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.023 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.023 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[6be3e86e-8f4b-45b9-aca0-9434955ae416]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.027 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 systemd-udevd[255193]: Network interface NamePolicy= disabled on kernel command line.
Jan 05 21:38:16 compute-0 NetworkManager[56598]: <info>  [1767649096.0451] device (tap64342629-0b): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:38:16 compute-0 NetworkManager[56598]: <info>  [1767649096.0457] device (tap64342629-0b): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:38:16 compute-0 systemd-machined[157312]: New machine qemu-14-instance-0000000d.
Jan 05 21:38:16 compute-0 systemd[1]: Started Virtual Machine qemu-14-instance-0000000d.
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.060 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[bf3ac527-8148-4f84-a816-7182b92d5737]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.065 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[32ab2e52-f195-48e2-be9c-1c86218666ab]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.099 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[59f8586f-0c1b-422d-9739-bef42eec835d]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.116 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[c0770241-5bc3-48d8-8455-34a636f465c7]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 5, 'rx_bytes': 616, 'tx_bytes': 354, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 16036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255203, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.133 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[06febabb-460a-4b21-80fb-ae124fb8bfb3]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556145, 'tstamp': 556145}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255207, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556148, 'tstamp': 556148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255207, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.135 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.137 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.139 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.140 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfd3046a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.140 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.141 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfd3046a-c0, col_values=(('external_ids', {'iface-id': '68b7e7cf-3a36-4106-85be-cc39d85ff653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:16.141 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.499 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649096.4983864, 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.499 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] VM Started (Lifecycle Event)
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.517 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.523 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649096.4986465, 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.523 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] VM Paused (Lifecycle Event)
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.537 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.542 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Synchronizing instance power state after lifecycle event "Paused"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 3 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.558 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.796 186022 DEBUG nova.network.neutron [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.799 186022 DEBUG nova.network.neutron [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Updated VIF entry in instance network info cache for port 64342629-0b04-40fb-a867-9404e7421cc7. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.800 186022 DEBUG nova.network.neutron [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Updating instance_info_cache with network_info: [{"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.817 186022 DEBUG oslo_concurrency.lockutils [req-9eab1936-3dac-40f7-a336-eec0dc3cdfc8 req-5e4da42f-44fd-4374-8c99-21bb2d00ea25 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.822 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Releasing lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.822 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Instance network_info: |[{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}]| _allocate_network_async /usr/lib/python3.9/site-packages/nova/compute/manager.py:1967
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.822 186022 DEBUG oslo_concurrency.lockutils [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquired lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.823 186022 DEBUG nova.network.neutron [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Refreshing network info cache for port 76d8404e-3237-44da-934d-3e7e8792c114 _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2007
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.825 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Start _get_guest_xml network_info=[{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] disk_info={'disk_bus': 'virtio', 'cdrom_bus': 'sata', 'mapping': {'root': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk': {'bus': 'virtio', 'dev': 'vda', 'type': 'disk', 'boot_index': '1'}, 'disk.config': {'bus': 'sata', 'dev': 'sda', 'type': 'cdrom'}}} 
image_meta=ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>) rescue=None block_device_info={'root_device_name': '/dev/vda', 'image': [{'guest_format': None, 'device_name': '/dev/vda', 'boot_index': 0, 'device_type': 'disk', 'disk_bus': 'virtio', 'encryption_secret_uuid': None, 'encryption_options': None, 'size': 0, 'encrypted': False, 'encryption_format': None, 'image_id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}], 'ephemerals': [], 'block_device_mapping': [], 'swap': None} _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7549
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.831 186022 WARNING nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.840 186022 DEBUG nova.virt.libvirt.host [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V1... _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1653
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.841 186022 DEBUG nova.virt.libvirt.host [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller missing on host. _has_cgroupsv1_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1663
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.845 186022 DEBUG nova.virt.libvirt.host [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Searching host: 'compute-0.ctlplane.example.com' for CPU controller through CGroups V2... _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1672
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.845 186022 DEBUG nova.virt.libvirt.host [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU controller found on host. _has_cgroupsv2_cpu_controller /usr/lib/python3.9/site-packages/nova/virt/libvirt/host.py:1679
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.845 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CPU mode 'host-model' models '' was chosen, with extra flags: '' _get_guest_cpu_model_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:5396
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.846 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Getting desirable topologies for flavor Flavor(created_at=2026-01-05T21:29:28Z,deleted=False,deleted_at=None,description=None,disabled=False,ephemeral_gb=0,extra_specs={hw_rng:allowed='True'},flavorid='ce1138a2-4b82-4664-8860-711a956c0882',id=3,is_public=True,memory_mb=128,name='m1.nano',projects=<?>,root_gb=1,rxtx_factor=1.0,swap=0,updated_at=None,vcpu_weight=0,vcpus=1) and image_meta ImageMeta(checksum='c8fc807773e5354afe61636071771906',container_format='bare',created_at=2026-01-05T21:33:24Z,direct_url=<?>,disk_format='qcow2',id=be6cfe06-61ed-4c76-8e1d-bc9df6929005,min_disk=0,min_ram=0,name='tempest-scenario-img--1998831437',owner='0d77496083304392a3bddf3b3cc09d6f',properties=ImageMetaProps,protected=<?>,size=21430272,status='active',tags=<?>,updated_at=2026-01-05T21:33:25Z,virtual_size=<?>,visibility=<?>), allow threads: True _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:563
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.846 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:348
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.846 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image limits 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:352
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.846 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Flavor pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:388
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.847 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Image pref 0:0:0 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:392
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.847 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Chose sockets=0, cores=0, threads=0; limits were sockets=65536, cores=65536, threads=65536 get_cpu_topology_constraints /usr/lib/python3.9/site-packages/nova/virt/hardware.py:430
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.847 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Topology preferred VirtCPUTopology(cores=0,sockets=0,threads=0), maximum VirtCPUTopology(cores=65536,sockets=65536,threads=65536) _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:569
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.847 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Build topologies for 1 vcpu(s) 1:1:1 _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:471
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.847 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Got 1 possible topologies _get_possible_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:501
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.848 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Possible topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:575
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.848 186022 DEBUG nova.virt.hardware [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Sorted desired topologies [VirtCPUTopology(cores=1,sockets=1,threads=1)] _get_desirable_cpu_topologies /usr/lib/python3.9/site-packages/nova/virt/hardware.py:577
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.851 186022 DEBUG nova.virt.libvirt.vif [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',id=14,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-130i0h19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-10918
53177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:38:11Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=66b489b4-d427-4eb3-b712-aa91b1410874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} virt_type=kvm get_config /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:563
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.851 186022 DEBUG nova.network.os_vif_util [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.852 186022 DEBUG nova.network.os_vif_util [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.853 186022 DEBUG nova.objects.instance [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'pci_devices' on Instance uuid 66b489b4-d427-4eb3-b712-aa91b1410874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.869 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] End _get_guest_xml xml=<domain type="kvm">
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <uuid>66b489b4-d427-4eb3-b712-aa91b1410874</uuid>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <name>instance-0000000e</name>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <memory>131072</memory>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <vcpu>1</vcpu>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <metadata>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <nova:instance xmlns:nova="http://openstack.org/xmlns/libvirt/nova/1.1">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:package version="27.5.2-0.20250829104910.6f8decf.el9"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:name>te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut</nova:name>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:creationTime>2026-01-05 21:38:16</nova:creationTime>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:flavor name="m1.nano">
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:memory>128</nova:memory>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:disk>1</nova:disk>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:swap>0</nova:swap>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:ephemeral>0</nova:ephemeral>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:vcpus>1</nova:vcpus>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       </nova:flavor>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:owner>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:user uuid="4adc8921daaf44d4b88d43bd5764da44">tempest-PrometheusGabbiTest-1091853177-project-member</nova:user>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:project uuid="0d77496083304392a3bddf3b3cc09d6f">tempest-PrometheusGabbiTest-1091853177</nova:project>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       </nova:owner>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:root type="image" uuid="be6cfe06-61ed-4c76-8e1d-bc9df6929005"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <nova:ports>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         <nova:port uuid="76d8404e-3237-44da-934d-3e7e8792c114">
Jan 05 21:38:16 compute-0 nova_compute[186018]:           <nova:ip type="fixed" address="10.100.2.244" ipVersion="4"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:         </nova:port>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       </nova:ports>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </nova:instance>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </metadata>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <sysinfo type="smbios">
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <system>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="manufacturer">RDO</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="product">OpenStack Compute</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="version">27.5.2-0.20250829104910.6f8decf.el9</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="serial">66b489b4-d427-4eb3-b712-aa91b1410874</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="uuid">66b489b4-d427-4eb3-b712-aa91b1410874</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <entry name="family">Virtual Machine</entry>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </system>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </sysinfo>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <os>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <type arch="x86_64" machine="q35">hvm</type>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <boot dev="hd"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <smbios mode="sysinfo"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </os>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <features>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <acpi/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <apic/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <vmcoreinfo/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </features>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <clock offset="utc">
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <timer name="pit" tickpolicy="delay"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <timer name="rtc" tickpolicy="catchup"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <timer name="hpet" present="no"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </clock>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <cpu mode="host-model" match="exact">
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <topology sockets="1" cores="1" threads="1"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </cpu>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   <devices>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <disk type="file" device="disk">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <driver name="qemu" type="qcow2" cache="none"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <target dev="vda" bus="virtio"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <disk type="file" device="cdrom">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <driver name="qemu" type="raw" cache="none"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <source file="/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.config"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <target dev="sda" bus="sata"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </disk>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <interface type="ethernet">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <mac address="fa:16:3e:58:ee:ae"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <driver name="vhost" rx_queue_size="512"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <mtu size="1442"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <target dev="tap76d8404e-32"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </interface>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <serial type="pty">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <log file="/var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/console.log" append="off"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </serial>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <graphics type="vnc" autoport="yes" listen="::0"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <video>
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <model type="virtio"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </video>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <input type="tablet" bus="usb"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <rng model="virtio">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <backend model="random">/dev/urandom</backend>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </rng>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="pci" model="pcie-root-port"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <controller type="usb" index="0"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     <memballoon model="virtio">
Jan 05 21:38:16 compute-0 nova_compute[186018]:       <stats period="10"/>
Jan 05 21:38:16 compute-0 nova_compute[186018]:     </memballoon>
Jan 05 21:38:16 compute-0 nova_compute[186018]:   </devices>
Jan 05 21:38:16 compute-0 nova_compute[186018]: </domain>
Jan 05 21:38:16 compute-0 nova_compute[186018]:  _get_guest_xml /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:7555
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.869 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Preparing to wait for external event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 prepare_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:283
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.869 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.870 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.870 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.prepare_for_instance_event.<locals>._create_or_get_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.870 186022 DEBUG nova.virt.libvirt.vif [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='',created_at=2026-01-05T21:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=None,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',ec2_ids=EC2Ids,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',id=14,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=KeyPairList,launch_index=0,launched_at=None,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=None,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=PciDeviceList,pci_requests=InstancePCIRequests,power_state=0,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-130i0h19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_machine_type='q35',image_min_disk='1',image_min_ram='0',network_allocated='True',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=TagList,task_state='spawning',terminated_at=None,trusted_certs=None,updated_at=2026-01-05T21:38:11Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=66b489b4-d427-4eb3-b712-aa91b1410874,vcpu_model=VirtCPUModel,vcpus=1,vm_mode=None,vm_state='building') vif={"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} plug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:710
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.871 186022 DEBUG nova.network.os_vif_util [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.871 186022 DEBUG nova.network.os_vif_util [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.872 186022 DEBUG os_vif [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Plugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32') plug /usr/lib/python3.9/site-packages/os_vif/__init__.py:76
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.872 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.872 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddBridgeCommand(_result=None, name=br-int, may_exist=True, datapath_type=system) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.873 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.876 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.876 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tap76d8404e-32, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.876 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=1): DbSetCommand(_result=None, table=Interface, record=tap76d8404e-32, col_values=(('external_ids', {'iface-id': '76d8404e-3237-44da-934d-3e7e8792c114', 'iface-status': 'active', 'attached-mac': 'fa:16:3e:58:ee:ae', 'vm-uuid': '66b489b4-d427-4eb3-b712-aa91b1410874'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.878 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 NetworkManager[56598]: <info>  [1767649096.8791] manager: (tap76d8404e-32): new Open vSwitch Port device (/org/freedesktop/NetworkManager/Devices/70)
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.882 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.888 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.889 186022 INFO os_vif [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully plugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32')
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.940 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name vda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.940 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No BDM found with device name sda, not building metadata. _build_disk_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12116
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.940 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] No VIF found with MAC fa:16:3e:58:ee:ae, not building metadata _build_interface_metadata /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:12092
Jan 05 21:38:16 compute-0 nova_compute[186018]: 2026-01-05 21:38:16.941 186022 INFO nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Using config drive
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.318 186022 INFO nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Creating config drive at /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.config
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.333 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Running cmd (subprocess): /usr/bin/mkisofs -o /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvmi21bu8 execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:17 compute-0 systemd[1]: Starting libvirt proxy daemon...
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.426 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '18'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:17 compute-0 systemd[1]: Started libvirt proxy daemon.
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.469 186022 DEBUG oslo_concurrency.processutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] CMD "/usr/bin/mkisofs -o /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk.config -ldots -allow-lowercase -allow-multidot -l -publisher OpenStack Compute 27.5.2-0.20250829104910.6f8decf.el9 -quiet -J -r -V config-2 /tmp/tmpvmi21bu8" returned: 0 in 0.136s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:17 compute-0 NetworkManager[56598]: <info>  [1767649097.5529] manager: (tap76d8404e-32): new Tun device (/org/freedesktop/NetworkManager/Devices/71)
Jan 05 21:38:17 compute-0 kernel: tap76d8404e-32: entered promiscuous mode
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.565 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:17 compute-0 ovn_controller[98229]: 2026-01-05T21:38:17Z|00167|binding|INFO|Claiming lport 76d8404e-3237-44da-934d-3e7e8792c114 for this chassis.
Jan 05 21:38:17 compute-0 ovn_controller[98229]: 2026-01-05T21:38:17Z|00168|binding|INFO|76d8404e-3237-44da-934d-3e7e8792c114: Claiming fa:16:3e:58:ee:ae 10.100.2.244
Jan 05 21:38:17 compute-0 NetworkManager[56598]: <info>  [1767649097.5705] device (tap76d8404e-32): state change: unmanaged -> unavailable (reason 'connection-assumed', managed-type: 'external')
Jan 05 21:38:17 compute-0 NetworkManager[56598]: <info>  [1767649097.5725] device (tap76d8404e-32): state change: unavailable -> disconnected (reason 'none', managed-type: 'external')
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.575 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:ae 10.100.2.244'], port_security=['fa:16:3e:58:ee:ae 10.100.2.244'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.244/16', 'neutron:device_id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '2', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=76d8404e-3237-44da-934d-3e7e8792c114) old=Port_Binding(chassis=[]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.577 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 76d8404e-3237-44da-934d-3e7e8792c114 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab bound to our chassis
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.579 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:38:17 compute-0 ovn_controller[98229]: 2026-01-05T21:38:17Z|00169|binding|INFO|Setting lport 76d8404e-3237-44da-934d-3e7e8792c114 ovn-installed in OVS
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.590 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:17 compute-0 ovn_controller[98229]: 2026-01-05T21:38:17Z|00170|binding|INFO|Setting lport 76d8404e-3237-44da-934d-3e7e8792c114 up in Southbound
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.593 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.611 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[2b96eac4-f70d-46f7-9b05-53da3bed82ed]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 systemd-machined[157312]: New machine qemu-15-instance-0000000e.
Jan 05 21:38:17 compute-0 systemd[1]: Started Virtual Machine qemu-15-instance-0000000e.
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.640 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[47ff7a15-bf82-4227-a120-8d5e8555ef4b]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.644 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[59748174-5fbb-4479-b53b-ea7afb124e9a]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.677 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[3244172d-a67f-40d1-8536-9a4d0d064c08]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.694 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[b57b77cd-659e-4eb4-baba-1effa668f3bb]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 7, 'rx_bytes': 616, 'tx_bytes': 438, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 16036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255260, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.715 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9d96b087-6147-490b-96f5-bb51e82b8e7c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556145, 'tstamp': 556145}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255262, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556148, 'tstamp': 556148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255262, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.717 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.719 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:17 compute-0 nova_compute[186018]: 2026-01-05 21:38:17.720 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.720 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfd3046a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.721 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.721 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfd3046a-c0, col_values=(('external_ids', {'iface-id': '68b7e7cf-3a36-4106-85be-cc39d85ff653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:17 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:17.721 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.000 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649097.9999719, 66b489b4-d427-4eb3-b712-aa91b1410874 => Started> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.001 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] VM Started (Lifecycle Event)
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.022 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.023 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.023 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.023 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.023 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Processing event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.024 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.024 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.024 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.024 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.024 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] No waiting events found dispatching network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 WARNING nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received unexpected event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 for instance with vm_state building and task_state spawning.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 DEBUG oslo_concurrency.lockutils [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.025 186022 DEBUG nova.compute.manager [req-8492a452-93f4-4149-b4b8-3a70a816b7ab req-69549618-068f-459e-991c-a2510fbb8808 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Processing event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10808
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.027 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Instance event wait completed in 1 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.028 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Instance event wait completed in 0 seconds for network-vif-plugged wait_for_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:577
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.030 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.037 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.049 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Guest created on hypervisor spawn /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:4417
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.054 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Synchronizing instance power state after lifecycle event "Started"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.058 186022 INFO nova.virt.libvirt.driver [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Instance spawned successfully.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.058 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.062 186022 INFO nova.virt.libvirt.driver [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Instance spawned successfully.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.063 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Attempting to register defaults for the following image properties: ['hw_cdrom_bus', 'hw_disk_bus', 'hw_input_bus', 'hw_pointer_model', 'hw_video_model', 'hw_vif_model'] _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:917
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.085 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.085 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649098.00011, 66b489b4-d427-4eb3-b712-aa91b1410874 => Paused> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.085 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] VM Paused (Lifecycle Event)
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.099 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.106 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.107 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.108 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.108 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.109 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.110 186022 DEBUG nova.virt.libvirt.driver [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.116 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.120 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_cdrom_bus of sata _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.121 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_disk_bus of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.121 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_input_bus of usb _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.122 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_pointer_model of usbtablet _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.123 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_video_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.123 186022 DEBUG nova.virt.libvirt.driver [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Found default for hw_vif_model of virtio _register_undefined_instance_details /usr/lib/python3.9/site-packages/nova/virt/libvirt/driver.py:946
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.131 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649098.034879, 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.132 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] VM Resumed (Lifecycle Event)
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.172 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.182 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.203 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.204 186022 DEBUG nova.virt.driver [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] Emitting event <LifecycleEvent: 1767649098.0370476, 66b489b4-d427-4eb3-b712-aa91b1410874 => Resumed> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.204 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] VM Resumed (Lifecycle Event)
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.212 186022 INFO nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Took 7.66 seconds to spawn the instance on the hypervisor.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.213 186022 DEBUG nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.223 186022 INFO nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Took 6.35 seconds to spawn the instance on the hypervisor.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.224 186022 DEBUG nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.228 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.237 186022 DEBUG nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Synchronizing instance power state after lifecycle event "Resumed"; current vm_state: building, current task_state: spawning, current DB power_state: 0, VM power_state: 1 handle_lifecycle_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:1396
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.269 186022 INFO nova.compute.manager [None req-ea417b70-802e-45e0-b43e-1d5f57f89c93 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] During sync_power_state the instance has a pending task (spawning). Skip.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.288 186022 INFO nova.compute.manager [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Took 8.12 seconds to build instance.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.301 186022 INFO nova.compute.manager [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Took 6.84 seconds to build instance.
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.304 186022 DEBUG oslo_concurrency.lockutils [None req-aad339ad-00a8-41eb-b2c8-5e0548e74107 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 8.205s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.313 186022 DEBUG oslo_concurrency.lockutils [None req-5c914f25-d76c-42e7-927d-c20127868ddb 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" "released" by "nova.compute.manager.ComputeManager.build_and_run_instance.<locals>._locked_do_build_and_run_instance" :: held 6.942s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.348 186022 DEBUG nova.network.neutron [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updated VIF entry in instance network info cache for port 76d8404e-3237-44da-934d-3e7e8792c114. _build_network_info_model /usr/lib/python3.9/site-packages/nova/network/neutron.py:3482
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.349 186022 DEBUG nova.network.neutron [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:18 compute-0 nova_compute[186018]: 2026-01-05 21:38:18.362 186022 DEBUG oslo_concurrency.lockutils [req-b2d8803e-4ffb-4d7d-9d7f-c8d5d34a093e req-66074d76-bde1-4aa4-898e-138c42265179 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Releasing lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:38:18 compute-0 podman[255275]: 2026-01-05 21:38:18.778642681 +0000 UTC m=+0.118063980 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_compute)
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.257 186022 DEBUG nova.compute.manager [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.258 186022 DEBUG oslo_concurrency.lockutils [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.258 186022 DEBUG oslo_concurrency.lockutils [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.258 186022 DEBUG oslo_concurrency.lockutils [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.259 186022 DEBUG nova.compute.manager [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] No waiting events found dispatching network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.259 186022 WARNING nova.compute.manager [req-e6c71758-6a98-4c6e-ae69-23ab8b0ba737 req-ef09e8d0-05ec-4882-9687-dc59f4e1f708 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received unexpected event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 for instance with vm_state active and task_state None.
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.764 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.764 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.764 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.765 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.765 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.766 186022 INFO nova.compute.manager [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Terminating instance
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.767 186022 DEBUG nova.compute.manager [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:38:20 compute-0 kernel: tap64342629-0b (unregistering): left promiscuous mode
Jan 05 21:38:20 compute-0 NetworkManager[56598]: <info>  [1767649100.7996] device (tap64342629-0b): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.811 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:20 compute-0 ovn_controller[98229]: 2026-01-05T21:38:20Z|00171|binding|INFO|Releasing lport 64342629-0b04-40fb-a867-9404e7421cc7 from this chassis (sb_readonly=0)
Jan 05 21:38:20 compute-0 ovn_controller[98229]: 2026-01-05T21:38:20Z|00172|binding|INFO|Setting lport 64342629-0b04-40fb-a867-9404e7421cc7 down in Southbound
Jan 05 21:38:20 compute-0 ovn_controller[98229]: 2026-01-05T21:38:20Z|00173|binding|INFO|Removing iface tap64342629-0b ovn-installed in OVS
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.834 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:0c:5e:3e 10.100.0.25'], port_security=['fa:16:3e:0c:5e:3e 10.100.0.25'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.25/16', 'neutron:device_id': '4bc1b97d-0c3d-4616-af67-f8b9ffc067f0', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[], tunnel_key=3, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=64342629-0b04-40fb-a867-9404e7421cc7) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.835 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 64342629-0b04-40fb-a867-9404e7421cc7 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab unbound from our chassis
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.837 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.838 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.845 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.859 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[ec73cc40-2d96-48cb-aa3a-506911e6fad0]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Deactivated successfully.
Jan 05 21:38:20 compute-0 systemd[1]: machine-qemu\x2d14\x2dinstance\x2d0000000d.scope: Consumed 3.345s CPU time.
Jan 05 21:38:20 compute-0 systemd-machined[157312]: Machine qemu-14-instance-0000000d terminated.
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.885 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[8057dddc-e9ac-458c-a3c0-3273c60ce3f9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.890 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[b8bfa38c-2cd6-4b18-b6a8-ed08dab7fcff]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.917 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[5fdfa26a-0500-4e30-bc2d-9fe035fcb6a3]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.934 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[4a89688c-66aa-4074-a874-ad0a43c24d76]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 8, 'tx_packets': 9, 'rx_bytes': 616, 'tx_bytes': 522, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 16036, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 
'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 255306, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.954 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[29e4772b-444d-4f96-ab91-f988f9dac16c]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556145, 'tstamp': 556145}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255307, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556148, 'tstamp': 556148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 255307, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.956 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.958 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:20 compute-0 nova_compute[186018]: 2026-01-05 21:38:20.964 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.965 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfd3046a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.966 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.966 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfd3046a-c0, col_values=(('external_ids', {'iface-id': '68b7e7cf-3a36-4106-85be-cc39d85ff653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:20 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:20.967 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.035 186022 INFO nova.virt.libvirt.driver [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Instance destroyed successfully.
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.036 186022 DEBUG nova.objects.instance [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'resources' on Instance uuid 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.061 186022 DEBUG nova.virt.libvirt.vif [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:38:08Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-wtpz2iwsyvrj-fzsk7hoskpni',id=13,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:38:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-b6z8lowl',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_
disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:38:18Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=4bc1b97d-0c3d-4616-af67-f8b9ffc067f0,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.062 186022 DEBUG nova.network.os_vif_util [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "64342629-0b04-40fb-a867-9404e7421cc7", "address": "fa:16:3e:0c:5e:3e", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.25", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap64342629-0b", "ovs_interfaceid": "64342629-0b04-40fb-a867-9404e7421cc7", "qbh_params": null, "qbg_params": null, "active": false, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.062 186022 DEBUG nova.network.os_vif_util [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.063 186022 DEBUG os_vif [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Unplugging vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.065 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.065 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap64342629-0b, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.067 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.068 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.071 186022 INFO os_vif [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=False,address=fa:16:3e:0c:5e:3e,bridge_name='br-int',has_traffic_filtering=True,id=64342629-0b04-40fb-a867-9404e7421cc7,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap64342629-0b')
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.071 186022 INFO nova.virt.libvirt.driver [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Deleting instance files /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0_del
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.072 186022 INFO nova.virt.libvirt.driver [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Deletion of /var/lib/nova/instances/4bc1b97d-0c3d-4616-af67-f8b9ffc067f0_del complete
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.129 186022 INFO nova.compute.manager [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Took 0.36 seconds to destroy the instance on the hypervisor.
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.130 186022 DEBUG oslo.service.loopingcall [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.130 186022 DEBUG nova.compute.manager [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.130 186022 DEBUG nova.network.neutron [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.690 186022 DEBUG nova.network.neutron [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.708 186022 INFO nova.compute.manager [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Took 0.58 seconds to deallocate network for instance.
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.745 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.745 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.789 186022 DEBUG nova.compute.manager [req-b3155b40-13c4-4717-bf56-e29bbd4e933b req-c92a0f46-067b-4cd8-9576-adf54ccfa5bd 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-vif-deleted-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.844 186022 DEBUG nova.compute.provider_tree [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.856 186022 DEBUG nova.scheduler.client.report [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.871 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.126s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.893 186022 INFO nova.scheduler.client.report [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Deleted allocations for instance 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0
Jan 05 21:38:21 compute-0 nova_compute[186018]: 2026-01-05 21:38:21.955 186022 DEBUG oslo_concurrency.lockutils [None req-bb61e249-effc-4262-808c-c3dde9d16869 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 1.191s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.330 186022 DEBUG nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-vif-unplugged-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.331 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.332 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.332 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.333 186022 DEBUG nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] No waiting events found dispatching network-vif-unplugged-64342629-0b04-40fb-a867-9404e7421cc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.333 186022 WARNING nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received unexpected event network-vif-unplugged-64342629-0b04-40fb-a867-9404e7421cc7 for instance with vm_state deleted and task_state None.
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.333 186022 DEBUG nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.334 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.334 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.335 186022 DEBUG oslo_concurrency.lockutils [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "4bc1b97d-0c3d-4616-af67-f8b9ffc067f0-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.335 186022 DEBUG nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] No waiting events found dispatching network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:38:22 compute-0 nova_compute[186018]: 2026-01-05 21:38:22.335 186022 WARNING nova.compute.manager [req-6311a929-325b-401a-98fa-7ac1f9024ea9 req-9accbb2a-6776-4655-b54d-5651dd93e0b0 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Received unexpected event network-vif-plugged-64342629-0b04-40fb-a867-9404e7421cc7 for instance with vm_state deleted and task_state None.
Jan 05 21:38:23 compute-0 nova_compute[186018]: 2026-01-05 21:38:23.102 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:26 compute-0 nova_compute[186018]: 2026-01-05 21:38:26.067 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:27 compute-0 nova_compute[186018]: 2026-01-05 21:38:27.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:27 compute-0 nova_compute[186018]: 2026-01-05 21:38:27.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:38:27 compute-0 podman[255327]: 2026-01-05 21:38:27.79181116 +0000 UTC m=+0.125743062 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, name=ubi9-minimal, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=openstack_network_exporter, distribution-scope=public, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, release=1755695350, io.openshift.tags=minimal rhel9, container_name=openstack_network_exporter)
Jan 05 21:38:27 compute-0 podman[255326]: 2026-01-05 21:38:27.815832002 +0000 UTC m=+0.152143426 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:38:28 compute-0 nova_compute[186018]: 2026-01-05 21:38:28.105 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:29 compute-0 podman[202426]: time="2026-01-05T21:38:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:38:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:38:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:38:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:38:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4831 "" "Go-http-client/1.1"
Jan 05 21:38:30 compute-0 nova_compute[186018]: 2026-01-05 21:38:30.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:30 compute-0 nova_compute[186018]: 2026-01-05 21:38:30.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:38:30 compute-0 nova_compute[186018]: 2026-01-05 21:38:30.721 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:38:30 compute-0 nova_compute[186018]: 2026-01-05 21:38:30.722 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:38:30 compute-0 nova_compute[186018]: 2026-01-05 21:38:30.722 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:38:31 compute-0 nova_compute[186018]: 2026-01-05 21:38:31.069 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:31 compute-0 openstack_network_exporter[205720]: ERROR   21:38:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:38:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:38:31 compute-0 openstack_network_exporter[205720]: ERROR   21:38:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:38:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:38:31 compute-0 podman[255370]: 2026-01-05 21:38:31.732415677 +0000 UTC m=+0.079655887 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:38:31 compute-0 podman[255371]: 2026-01-05 21:38:31.749603 +0000 UTC m=+0.093049921 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.743 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.788 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.789 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.789 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.880 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.881 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.881 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.881 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:38:32 compute-0 nova_compute[186018]: 2026-01-05 21:38:32.962 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.025 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.026 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.086 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.093 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.112 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.177 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.084s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.182 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.260 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.268 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.329 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.331 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.405 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.769 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.772 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4679MB free_disk=72.28580856323242GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.772 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.773 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.859 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.859 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.860 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.860 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.861 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.948 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:38:33 compute-0 nova_compute[186018]: 2026-01-05 21:38:33.997 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:38:34 compute-0 nova_compute[186018]: 2026-01-05 21:38:34.022 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:38:34 compute-0 nova_compute[186018]: 2026-01-05 21:38:34.022 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.250s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:35 compute-0 nova_compute[186018]: 2026-01-05 21:38:35.017 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:35 compute-0 nova_compute[186018]: 2026-01-05 21:38:35.018 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:35 compute-0 nova_compute[186018]: 2026-01-05 21:38:35.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:35 compute-0 nova_compute[186018]: 2026-01-05 21:38:35.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:36 compute-0 nova_compute[186018]: 2026-01-05 21:38:36.032 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767649101.030646, 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:38:36 compute-0 nova_compute[186018]: 2026-01-05 21:38:36.033 186022 INFO nova.compute.manager [-] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] VM Stopped (Lifecycle Event)
Jan 05 21:38:36 compute-0 nova_compute[186018]: 2026-01-05 21:38:36.071 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:36 compute-0 nova_compute[186018]: 2026-01-05 21:38:36.073 186022 DEBUG nova.compute.manager [None req-7807bd07-0d96-4cf4-8049-bb17dbdf0447 - - - - - -] [instance: 4bc1b97d-0c3d-4616-af67-f8b9ffc067f0] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:38:36 compute-0 nova_compute[186018]: 2026-01-05 21:38:36.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:38 compute-0 nova_compute[186018]: 2026-01-05 21:38:38.111 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:38 compute-0 podman[255429]: 2026-01-05 21:38:38.785560849 +0000 UTC m=+0.120198968 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:38:40 compute-0 nova_compute[186018]: 2026-01-05 21:38:40.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:40 compute-0 nova_compute[186018]: 2026-01-05 21:38:40.491 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:38:41 compute-0 nova_compute[186018]: 2026-01-05 21:38:41.073 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:42.876 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:38:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:42.877 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:38:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:38:42.877 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:38:43 compute-0 nova_compute[186018]: 2026-01-05 21:38:43.113 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:45 compute-0 podman[255452]: 2026-01-05 21:38:45.763125229 +0000 UTC m=+0.119994381 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, architecture=x86_64, com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., config_id=kepler, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, build-date=2024-09-18T21:23:30, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, release=1214.1726694543, name=ubi9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, vendor=Red Hat, Inc., 
io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.openshift.expose-services=, release-0.7.12=, version=9.4, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.tags=base rhel9)
Jan 05 21:38:45 compute-0 podman[255453]: 2026-01-05 21:38:45.783859277 +0000 UTC m=+0.126820147 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 21:38:46 compute-0 nova_compute[186018]: 2026-01-05 21:38:46.075 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:48 compute-0 nova_compute[186018]: 2026-01-05 21:38:48.116 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:49 compute-0 podman[255492]: 2026-01-05 21:38:49.750923312 +0000 UTC m=+0.098551840 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, managed_by=edpm_ansible, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4)
Jan 05 21:38:51 compute-0 nova_compute[186018]: 2026-01-05 21:38:51.077 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:51 compute-0 ovn_controller[98229]: 2026-01-05T21:38:51Z|00021|pinctrl(ovn_pinctrl0)|INFO|DHCPOFFER fa:16:3e:58:ee:ae 10.100.2.244
Jan 05 21:38:51 compute-0 ovn_controller[98229]: 2026-01-05T21:38:51Z|00022|pinctrl(ovn_pinctrl0)|INFO|DHCPACK fa:16:3e:58:ee:ae 10.100.2.244
Jan 05 21:38:51 compute-0 ovn_controller[98229]: 2026-01-05T21:38:51Z|00174|memory_trim|INFO|Detected inactivity (last active 30018 ms ago): trimming memory
Jan 05 21:38:53 compute-0 nova_compute[186018]: 2026-01-05 21:38:53.119 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:56 compute-0 nova_compute[186018]: 2026-01-05 21:38:56.082 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:58 compute-0 nova_compute[186018]: 2026-01-05 21:38:58.126 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:38:58 compute-0 sshd-session[255519]: Invalid user user from 78.128.112.74 port 60990
Jan 05 21:38:58 compute-0 podman[255522]: 2026-01-05 21:38:58.477850286 +0000 UTC m=+0.099031216 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, distribution-scope=public, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., architecture=x86_64, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, version=9.6, name=ubi9-minimal, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, config_id=openstack_network_exporter, io.k8s.description=The 
Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, io.openshift.tags=minimal rhel9)
Jan 05 21:38:58 compute-0 podman[255521]: 2026-01-05 21:38:58.478102874 +0000 UTC m=+0.103312862 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 05 21:38:58 compute-0 sshd-session[255519]: Connection closed by invalid user user 78.128.112.74 port 60990 [preauth]
Jan 05 21:38:59 compute-0 podman[202426]: time="2026-01-05T21:38:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:38:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:38:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:38:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:38:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Jan 05 21:39:01 compute-0 nova_compute[186018]: 2026-01-05 21:39:01.086 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:01 compute-0 openstack_network_exporter[205720]: ERROR   21:39:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:39:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:39:01 compute-0 openstack_network_exporter[205720]: ERROR   21:39:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:39:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:39:02 compute-0 podman[255564]: 2026-01-05 21:39:02.718278391 +0000 UTC m=+0.065478410 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent)
Jan 05 21:39:02 compute-0 podman[255565]: 2026-01-05 21:39:02.753048925 +0000 UTC m=+0.097251049 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:39:03 compute-0 nova_compute[186018]: 2026-01-05 21:39:03.125 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:06 compute-0 nova_compute[186018]: 2026-01-05 21:39:06.090 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.790 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.791 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.797 14 DEBUG ceilometer.compute.discovery [-] Querying metadata for instance 66b489b4-d427-4eb3-b712-aa91b1410874 from Nova API get_server /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:176
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.798 14 DEBUG novaclient.v2.client [-] REQ: curl -g -i -X GET https://nova-internal.openstack.svc:8774/v2.1/servers/66b489b4-d427-4eb3-b712-aa91b1410874 -H "Accept: application/json" -H "User-Agent: python-novaclient" -H "X-Auth-Token: {SHA256}f276ecb8e60cef1797549a0d2bcc21ef3546f9ad65f5da0e31c0a93bf2cbb910" -H "X-OpenStack-Nova-API-Version: 2.1" _http_log_request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:572
Jan 05 21:39:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:39:08 compute-0 nova_compute[186018]: 2026-01-05 21:39:08.127 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.908 14 DEBUG novaclient.v2.client [-] RESP: [200] Connection: Keep-Alive Content-Length: 1832 Content-Type: application/json Date: Mon, 05 Jan 2026 21:39:07 GMT Keep-Alive: timeout=5, max=100 OpenStack-API-Version: compute 2.1 Server: Apache Vary: OpenStack-API-Version,X-OpenStack-Nova-API-Version X-OpenStack-Nova-API-Version: 2.1 x-compute-request-id: req-920c00fc-d618-457d-8452-cdbf22d6db24 x-openstack-request-id: req-920c00fc-d618-457d-8452-cdbf22d6db24 _http_log_response /usr/lib/python3.12/site-packages/keystoneauth1/session.py:613
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.909 14 DEBUG novaclient.v2.client [-] RESP BODY: {"server": {"id": "66b489b4-d427-4eb3-b712-aa91b1410874", "name": "te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut", "status": "ACTIVE", "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "user_id": "4adc8921daaf44d4b88d43bd5764da44", "metadata": {"metering.server_group": "592ac083-4e5e-4ede-94dc-941b228764d4"}, "hostId": "3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8", "image": {"id": "be6cfe06-61ed-4c76-8e1d-bc9df6929005", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/images/be6cfe06-61ed-4c76-8e1d-bc9df6929005"}]}, "flavor": {"id": "ce1138a2-4b82-4664-8860-711a956c0882", "links": [{"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/flavors/ce1138a2-4b82-4664-8860-711a956c0882"}]}, "created": "2026-01-05T21:38:10Z", "updated": "2026-01-05T21:38:18Z", "addresses": {"": [{"version": 4, "addr": "10.100.2.244", "OS-EXT-IPS:type": "fixed", "OS-EXT-IPS-MAC:mac_addr": "fa:16:3e:58:ee:ae"}]}, "accessIPv4": "", "accessIPv6": "", "links": [{"rel": "self", "href": "https://nova-internal.openstack.svc:8774/v2.1/servers/66b489b4-d427-4eb3-b712-aa91b1410874"}, {"rel": "bookmark", "href": "https://nova-internal.openstack.svc:8774/servers/66b489b4-d427-4eb3-b712-aa91b1410874"}], "OS-DCF:diskConfig": "MANUAL", "progress": 0, "OS-EXT-AZ:availability_zone": "nova", "config_drive": "True", "key_name": null, "OS-SRV-USG:launched_at": "2026-01-05T21:38:18.000000", "OS-SRV-USG:terminated_at": null, "security_groups": [{"name": "default"}], "OS-EXT-SRV-ATTR:host": "compute-0.ctlplane.example.com", "OS-EXT-SRV-ATTR:instance_name": "instance-0000000e", "OS-EXT-SRV-ATTR:hypervisor_hostname": "compute-0.ctlplane.example.com", "OS-EXT-STS:task_state": null, "OS-EXT-STS:vm_state": "active", "OS-EXT-STS:power_state": 1, "os-extended-volumes:volumes_attached": []}} _http_log_response 
/usr/lib/python3.12/site-packages/keystoneauth1/session.py:648
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.909 14 DEBUG novaclient.v2.client [-] GET call to compute for https://nova-internal.openstack.svc:8774/v2.1/servers/66b489b4-d427-4eb3-b712-aa91b1410874 used request id req-920c00fc-d618-457d-8452-cdbf22d6db24 request /usr/lib/python3.12/site-packages/keystoneauth1/session.py:1073
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.910 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'name': 'te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.914 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.917 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.918 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.918 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.918 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.918 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.919 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.919 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.920 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.920 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.920 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.920 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.920 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:39:08.918510) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.921 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:39:08.920403) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.927 14 DEBUG ceilometer.compute.virt.libvirt.inspector [-] No delta meter predecessor for 66b489b4-d427-4eb3-b712-aa91b1410874 / tap76d8404e-32 inspect_vnics /usr/lib/python3.12/site-packages/ceilometer/compute/virt/libvirt/inspector.py:143
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.927 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets volume: 12 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.934 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.941 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.942 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.942 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.943 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.943 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.943 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.943 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.944 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.944 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.945 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.946 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.947 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.947 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.947 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.947 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.948 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.948 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.949 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.949 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:39:08.943890) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.950 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:39:08.948087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.951 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:39:08.951650) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.953 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.954 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.954 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.955 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.955 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.956 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.957 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.957 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.957 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:39:08.954557) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.957 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.958 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.958 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.958 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.959 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.961 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:39:08.957953) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.961 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.rate in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.962 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for OutgoingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.962 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.outgoing.bytes.rate from polling [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut>]
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.963 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.963 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.963 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.963 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.964 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.964 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.965 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.966 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.966 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.966 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.967 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.967 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.968 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.969 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.969 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.969 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.970 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.970 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.970 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.rate (2026-01-05T21:39:08.962112) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:39:08.964058) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:39:08.966905) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.975 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:39:08.970641) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.987 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:08.987 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.011 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.012 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.033 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.033 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.034 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.035 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.035 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.035 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.035 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.035 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.036 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.036 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.037 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 336 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.037 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.038 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.038 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.rate in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.039 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.039 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:39:09.035915) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.039 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.039 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.rate heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.040 14 DEBUG ceilometer.compute.pollsters [-] LibvirtInspector does not provide data for IncomingBytesRatePollster get_samples /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:162
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.040 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.rate (2026-01-05T21:39:09.039425) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.040 14 ERROR ceilometer.polling.manager [-] Prevent pollster network.incoming.bytes.rate from polling [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut>] on source pollsters anymore!: ceilometer.polling.plugin_base.PollsterPermanentError: [<NovaLikeServer: te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut>]
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.040 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.040 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.041 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.041 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.041 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.041 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.042 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.042 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.043 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.043 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.043 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.044 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.044 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.044 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:39:09.041447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.046 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:39:09.044406) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.072 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/memory.usage volume: 43.453125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.097 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.121 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 43.72265625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.122 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.122 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.123 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.123 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.123 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.123 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.124 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:39:09.123792) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.124 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.125 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.125 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.126 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.126 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.126 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.127 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.127 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.127 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.127 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.128 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.129 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.129 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.130 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.131 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.132 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.132 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.132 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.132 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.133 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.133 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:39:09.127598) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.133 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.134 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:39:09.133894) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.169 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.170 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.239 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.239 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.283 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 29568000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.284 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.285 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.285 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.285 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.285 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.285 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.286 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.286 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes volume: 1430 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.286 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:39:09.286093) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.287 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.288 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 1688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.288 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.288 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.289 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.289 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.289 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.289 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.289 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 470547540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.290 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 52877300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.290 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.291 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.291 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 575714939 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.292 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 64092754 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.293 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.293 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.293 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.293 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.293 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.294 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.294 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.294 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.295 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.295 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:39:09.289634) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.296 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.296 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:39:09.294071) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.296 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1061 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.296 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.297 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.297 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.298 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.299 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.299 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.300 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.300 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 29818880 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.300 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.301 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.301 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.301 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:39:09.299920) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.302 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.302 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 DEBUG ceilometer.polling.manager [-] Pollster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.303 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 72781824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.304 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.304 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.304 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.304 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 72863744 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.305 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.305 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.305 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/cpu volume: 49350000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.306 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 39740000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.307 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 324900000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.307 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.307 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.307 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.307 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.308 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.308 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.308 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 2668784700 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.308 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.308 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.309 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.309 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3874481687 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.309 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.310 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 306 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.311 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.311 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:39:09.303764) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.311 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:39:09.306389) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 314 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:39:09.308097) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:39:09.310774) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.312 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.313 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.313 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.314 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:39:09.315 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:39:09 compute-0 podman[255604]: 2026-01-05 21:39:09.722686473 +0000 UTC m=+0.077065848 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:39:11 compute-0 nova_compute[186018]: 2026-01-05 21:39:11.093 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:13 compute-0 nova_compute[186018]: 2026-01-05 21:39:13.131 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:16 compute-0 nova_compute[186018]: 2026-01-05 21:39:16.096 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:16 compute-0 podman[255633]: 2026-01-05 21:39:16.77601478 +0000 UTC m=+0.114098084 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, org.label-schema.build-date=20251202, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 05 21:39:16 compute-0 podman[255632]: 2026-01-05 21:39:16.776578987 +0000 UTC m=+0.117055787 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, version=9.4, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, managed_by=edpm_ansible, name=ubi9, config_id=kepler, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., release=1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, release-0.7.12=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, com.redhat.component=ubi9-container)
Jan 05 21:39:18 compute-0 nova_compute[186018]: 2026-01-05 21:39:18.134 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:20 compute-0 podman[255672]: 2026-01-05 21:39:20.713739004 +0000 UTC m=+0.071346026 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS)
Jan 05 21:39:21 compute-0 nova_compute[186018]: 2026-01-05 21:39:21.102 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:23 compute-0 nova_compute[186018]: 2026-01-05 21:39:23.137 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:26 compute-0 nova_compute[186018]: 2026-01-05 21:39:26.110 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:28 compute-0 nova_compute[186018]: 2026-01-05 21:39:28.140 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:28 compute-0 nova_compute[186018]: 2026-01-05 21:39:28.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:28 compute-0 nova_compute[186018]: 2026-01-05 21:39:28.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:39:28 compute-0 podman[255694]: 2026-01-05 21:39:28.775850519 +0000 UTC m=+0.107914077 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', 
'/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, io.openshift.expose-services=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, version=9.6, release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., com.redhat.component=ubi9-minimal-container, architecture=x86_64, config_id=openstack_network_exporter, distribution-scope=public, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter)
Jan 05 21:39:28 compute-0 podman[255693]: 2026-01-05 21:39:28.814654521 +0000 UTC m=+0.154879308 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, 
org.label-schema.build-date=20251202, org.label-schema.license=GPLv2)
Jan 05 21:39:29 compute-0 podman[202426]: time="2026-01-05T21:39:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:39:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:39:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:39:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:39:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4838 "" "Go-http-client/1.1"
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.463 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.464 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.465 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.750 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.756 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.757 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:39:30 compute-0 nova_compute[186018]: 2026-01-05 21:39:30.758 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:39:31 compute-0 nova_compute[186018]: 2026-01-05 21:39:31.114 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:31 compute-0 openstack_network_exporter[205720]: ERROR   21:39:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:39:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:39:31 compute-0 openstack_network_exporter[205720]: ERROR   21:39:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:39:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:39:33 compute-0 nova_compute[186018]: 2026-01-05 21:39:33.143 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:33 compute-0 podman[255735]: 2026-01-05 21:39:33.713876023 +0000 UTC m=+0.069906781 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:39:33 compute-0 podman[255736]: 2026-01-05 21:39:33.735790988 +0000 UTC m=+0.084798423 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.379 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.398 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.399 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.400 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.422 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.422 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.423 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.423 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.501 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.597 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.096s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.598 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.659 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.669 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.747 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.079s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.748 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.817 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.824 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.882 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.884 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:39:34 compute-0 nova_compute[186018]: 2026-01-05 21:39:34.950 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.285 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.287 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4700MB free_disk=72.25796127319336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.287 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.288 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.388 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.388 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.389 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.389 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.390 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.468 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.483 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.486 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:39:35 compute-0 nova_compute[186018]: 2026-01-05 21:39:35.486 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.198s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:39:36 compute-0 nova_compute[186018]: 2026-01-05 21:39:36.118 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:36 compute-0 nova_compute[186018]: 2026-01-05 21:39:36.547 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:36 compute-0 nova_compute[186018]: 2026-01-05 21:39:36.549 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:36 compute-0 nova_compute[186018]: 2026-01-05 21:39:36.550 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:37 compute-0 nova_compute[186018]: 2026-01-05 21:39:37.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:38 compute-0 nova_compute[186018]: 2026-01-05 21:39:38.146 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:38 compute-0 nova_compute[186018]: 2026-01-05 21:39:38.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:40 compute-0 podman[255797]: 2026-01-05 21:39:40.711086807 +0000 UTC m=+0.065660286 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:39:41 compute-0 nova_compute[186018]: 2026-01-05 21:39:41.120 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:41 compute-0 nova_compute[186018]: 2026-01-05 21:39:41.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:39:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:39:42.878 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:39:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:39:42.879 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:39:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:39:42.880 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:39:43 compute-0 nova_compute[186018]: 2026-01-05 21:39:43.149 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:46 compute-0 nova_compute[186018]: 2026-01-05 21:39:46.123 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:47 compute-0 podman[255821]: 2026-01-05 21:39:47.728514394 +0000 UTC m=+0.075306322 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, release=1214.1726694543, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', 
'/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, name=ubi9, release-0.7.12=, io.openshift.tags=base rhel9, com.redhat.component=ubi9-container, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:39:47 compute-0 podman[255822]: 2026-01-05 21:39:47.766307984 +0000 UTC m=+0.103783926 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible)
Jan 05 21:39:48 compute-0 nova_compute[186018]: 2026-01-05 21:39:48.151 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:51 compute-0 nova_compute[186018]: 2026-01-05 21:39:51.125 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:51 compute-0 podman[255861]: 2026-01-05 21:39:51.734901998 +0000 UTC m=+0.093821589 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, tcib_managed=true, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:39:53 compute-0 nova_compute[186018]: 2026-01-05 21:39:53.155 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:56 compute-0 nova_compute[186018]: 2026-01-05 21:39:56.127 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:58 compute-0 nova_compute[186018]: 2026-01-05 21:39:58.157 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:39:59 compute-0 podman[255906]: 2026-01-05 21:39:59.745192138 +0000 UTC m=+0.089303936 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, release=1755695350, config_id=openstack_network_exporter, vendor=Red Hat, Inc., version=9.6, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vcs-type=git, io.openshift.expose-services=, name=ubi9-minimal, distribution-scope=public, url=https://catalog.redhat.com/en/search?searchType=containers, container_name=openstack_network_exporter, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:39:59 compute-0 podman[202426]: time="2026-01-05T21:39:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:39:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:39:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:39:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:39:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4834 "" "Go-http-client/1.1"
Jan 05 21:39:59 compute-0 podman[255905]: 2026-01-05 21:39:59.778047922 +0000 UTC m=+0.128942565 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller)
Jan 05 21:40:01 compute-0 nova_compute[186018]: 2026-01-05 21:40:01.130 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:01 compute-0 openstack_network_exporter[205720]: ERROR   21:40:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:40:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:40:01 compute-0 openstack_network_exporter[205720]: ERROR   21:40:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:40:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:40:03 compute-0 nova_compute[186018]: 2026-01-05 21:40:03.160 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:04 compute-0 podman[255949]: 2026-01-05 21:40:04.725530565 +0000 UTC m=+0.065918803 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:40:04 compute-0 podman[255948]: 2026-01-05 21:40:04.759739952 +0000 UTC m=+0.091780675 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202)
Jan 05 21:40:06 compute-0 nova_compute[186018]: 2026-01-05 21:40:06.133 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:08 compute-0 nova_compute[186018]: 2026-01-05 21:40:08.162 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:11 compute-0 nova_compute[186018]: 2026-01-05 21:40:11.137 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:11 compute-0 podman[255988]: 2026-01-05 21:40:11.71548025 +0000 UTC m=+0.065827251 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:40:13 compute-0 nova_compute[186018]: 2026-01-05 21:40:13.164 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:16 compute-0 nova_compute[186018]: 2026-01-05 21:40:16.139 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:18 compute-0 nova_compute[186018]: 2026-01-05 21:40:18.167 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:18 compute-0 podman[256012]: 2026-01-05 21:40:18.723934541 +0000 UTC m=+0.063517548 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:40:18 compute-0 podman[256011]: 2026-01-05 21:40:18.743502352 +0000 UTC m=+0.086573119 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-container, name=ubi9, io.openshift.expose-services=, release=1214.1726694543, vcs-type=git, managed_by=edpm_ansible, summary=Provides the latest release of Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, release-0.7.12=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64, container_name=kepler, io.buildah.version=1.29.0, maintainer=Red Hat, Inc., config_id=kepler, distribution-scope=public)
Jan 05 21:40:20 compute-0 systemd[1]: virtproxyd.service: Deactivated successfully.
Jan 05 21:40:21 compute-0 nova_compute[186018]: 2026-01-05 21:40:21.142 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:22 compute-0 podman[256052]: 2026-01-05 21:40:22.789349108 +0000 UTC m=+0.132662333 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible)
Jan 05 21:40:23 compute-0 nova_compute[186018]: 2026-01-05 21:40:23.171 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:26 compute-0 nova_compute[186018]: 2026-01-05 21:40:26.144 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:28 compute-0 nova_compute[186018]: 2026-01-05 21:40:28.174 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:28 compute-0 nova_compute[186018]: 2026-01-05 21:40:28.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:28 compute-0 nova_compute[186018]: 2026-01-05 21:40:28.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:40:29 compute-0 podman[202426]: time="2026-01-05T21:40:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:40:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:40:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:40:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:40:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4832 "" "Go-http-client/1.1"
Jan 05 21:40:30 compute-0 podman[256073]: 2026-01-05 21:40:30.815921045 +0000 UTC m=+0.139315074 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, name=ubi9-minimal, version=9.6, com.redhat.component=ubi9-minimal-container, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, managed_by=edpm_ansible, container_name=openstack_network_exporter, url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, build-date=2025-08-20T13:12:41, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 
'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7)
Jan 05 21:40:30 compute-0 podman[256072]: 2026-01-05 21:40:30.836596442 +0000 UTC m=+0.169134231 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, 
org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:40:31 compute-0 nova_compute[186018]: 2026-01-05 21:40:31.146 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:31 compute-0 openstack_network_exporter[205720]: ERROR   21:40:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:40:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:40:31 compute-0 openstack_network_exporter[205720]: ERROR   21:40:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:40:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:40:32 compute-0 nova_compute[186018]: 2026-01-05 21:40:32.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:32 compute-0 nova_compute[186018]: 2026-01-05 21:40:32.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:40:32 compute-0 nova_compute[186018]: 2026-01-05 21:40:32.794 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:40:32 compute-0 nova_compute[186018]: 2026-01-05 21:40:32.795 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:40:32 compute-0 nova_compute[186018]: 2026-01-05 21:40:32.795 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.178 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.954 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.971 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.972 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.972 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.990 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.991 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.991 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:40:33 compute-0 nova_compute[186018]: 2026-01-05 21:40:33.991 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.075 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.142 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.144 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.207 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.214 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.271 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.057s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.272 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.344 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.072s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.352 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.447 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.450 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.517 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.067s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.955 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.957 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4679MB free_disk=72.25796127319336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.957 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:40:34 compute-0 nova_compute[186018]: 2026-01-05 21:40:34.958 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.041 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.041 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.041 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.042 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.042 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.119 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.135 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.138 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:40:35 compute-0 nova_compute[186018]: 2026-01-05 21:40:35.138 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.180s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:40:35 compute-0 podman[256135]: 2026-01-05 21:40:35.757018997 +0000 UTC m=+0.103117665 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:40:35 compute-0 podman[256134]: 2026-01-05 21:40:35.765726454 +0000 UTC m=+0.113872697 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, io.buildah.version=1.41.3, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 05 21:40:36 compute-0 nova_compute[186018]: 2026-01-05 21:40:36.149 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:36 compute-0 nova_compute[186018]: 2026-01-05 21:40:36.628 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:36 compute-0 nova_compute[186018]: 2026-01-05 21:40:36.629 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:36 compute-0 nova_compute[186018]: 2026-01-05 21:40:36.630 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:38 compute-0 nova_compute[186018]: 2026-01-05 21:40:38.183 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:39 compute-0 nova_compute[186018]: 2026-01-05 21:40:39.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:40 compute-0 nova_compute[186018]: 2026-01-05 21:40:40.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:41 compute-0 nova_compute[186018]: 2026-01-05 21:40:41.152 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:41 compute-0 nova_compute[186018]: 2026-01-05 21:40:41.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:42 compute-0 podman[256174]: 2026-01-05 21:40:42.737149059 +0000 UTC m=+0.081520629 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:40:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:40:42.879 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:40:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:40:42.880 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:40:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:40:42.880 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:40:43 compute-0 nova_compute[186018]: 2026-01-05 21:40:43.187 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:43 compute-0 nova_compute[186018]: 2026-01-05 21:40:43.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:40:46 compute-0 nova_compute[186018]: 2026-01-05 21:40:46.155 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:48 compute-0 nova_compute[186018]: 2026-01-05 21:40:48.190 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:49 compute-0 podman[256199]: 2026-01-05 21:40:49.750374102 +0000 UTC m=+0.094051077 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:40:49 compute-0 podman[256198]: 2026-01-05 21:40:49.774000562 +0000 UTC m=+0.121627532 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., config_id=kepler, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, architecture=x86_64, distribution-scope=public, release-0.7.12=, com.redhat.component=ubi9-container, release=1214.1726694543, io.buildah.version=1.29.0, managed_by=edpm_ansible, vcs-type=git, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.expose-services=, build-date=2024-09-18T21:23:30, name=ubi9, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:40:51 compute-0 nova_compute[186018]: 2026-01-05 21:40:51.159 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:53 compute-0 nova_compute[186018]: 2026-01-05 21:40:53.192 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:53 compute-0 podman[256236]: 2026-01-05 21:40:53.77796222 +0000 UTC m=+0.121815719 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251224, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:40:56 compute-0 nova_compute[186018]: 2026-01-05 21:40:56.161 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:58 compute-0 nova_compute[186018]: 2026-01-05 21:40:58.194 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:40:59 compute-0 podman[202426]: time="2026-01-05T21:40:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:40:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:40:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:40:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:40:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4832 "" "Go-http-client/1.1"
Jan 05 21:41:01 compute-0 nova_compute[186018]: 2026-01-05 21:41:01.163 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:01 compute-0 openstack_network_exporter[205720]: ERROR   21:41:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:41:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:41:01 compute-0 openstack_network_exporter[205720]: ERROR   21:41:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:41:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:41:01 compute-0 podman[256256]: 2026-01-05 21:41:01.746398621 +0000 UTC m=+0.095519604 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, version=9.6, config_id=openstack_network_exporter, container_name=openstack_network_exporter, vcs-type=git, architecture=x86_64, io.buildah.version=1.33.7, managed_by=edpm_ansible, release=1755695350, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, io.openshift.expose-services=, maintainer=Red Hat, Inc., com.redhat.component=ubi9-minimal-container)
Jan 05 21:41:01 compute-0 podman[256255]: 2026-01-05 21:41:01.818449508 +0000 UTC m=+0.161122956 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS)
Jan 05 21:41:03 compute-0 nova_compute[186018]: 2026-01-05 21:41:03.198 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:06 compute-0 nova_compute[186018]: 2026-01-05 21:41:06.165 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:06 compute-0 podman[256298]: 2026-01-05 21:41:06.74003918 +0000 UTC m=+0.079057521 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:41:06 compute-0 podman[256297]: 2026-01-05 21:41:06.748833429 +0000 UTC m=+0.102828506 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.791 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.791 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.791 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.792 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163ef8d040>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.798 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'name': 'te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.801 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.804 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.804 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.804 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.804 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.805 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.806 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.806 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.806 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:41:07.804989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:41:07.807090) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.810 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.813 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.817 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.818 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.818 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.818 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.818 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.818 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.819 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.819 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.819 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.819 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:41:07.819024) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.820 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:41:07.820497) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.821 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:41:07.821791) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.822 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.823 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.823 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:41:07.822823) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.824 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.826 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.825 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:41:07.824294) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.826 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.826 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:41:07.825864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.828 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.828 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:41:07.827834) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.828 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:41:07.829870) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.848 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.848 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.861 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.861 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.873 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.873 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.874 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.875 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.875 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.875 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.875 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.875 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.876 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.877 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:41:07.874869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:41:07.876518) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.878 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:41:07.877810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.895 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/memory.usage volume: 43.44140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.915 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.935 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 42.24609375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.936 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.937 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.937 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.937 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:41:07.936817) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.938 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.939 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.939 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.939 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.939 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.940 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.940 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.940 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.940 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.940 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.941 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.941 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:41:07.938588) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.941 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:41:07.941126) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.974 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:07.974 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.007 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.008 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.072 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.073 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.074 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.074 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.075 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.075 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.075 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.075 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.076 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.076 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.077 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 1688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.078 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.078 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.078 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.078 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.079 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.079 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.079 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 470547540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.080 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 52877300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.080 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.081 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.081 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 603913622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.082 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 71189160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.084 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.085 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.085 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.086 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.087 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.087 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.088 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.089 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:41:08.075864) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.089 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.089 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:41:08.079383) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.089 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.090 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.090 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:41:08.084606) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.090 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.090 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.091 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.091 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.092 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:41:08.090194) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.092 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.093 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.093 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.095 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.096 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.097 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.097 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.098 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:41:08.096062) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.098 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.099 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.099 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.101 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.101 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.101 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.101 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.101 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.102 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/cpu volume: 167830000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.102 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 41240000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.103 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 334470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.104 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.104 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.105 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.105 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.105 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:41:08.101879) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.106 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.106 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 2697062897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:41:08.106052) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.107 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.108 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.108 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.109 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3937989191 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.109 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.110 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.111 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.112 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.112 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:41:08.111832) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.112 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.113 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.113 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.114 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.114 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.115 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.116 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:41:08.117 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:41:08 compute-0 nova_compute[186018]: 2026-01-05 21:41:08.200 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:11 compute-0 nova_compute[186018]: 2026-01-05 21:41:11.169 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:13 compute-0 nova_compute[186018]: 2026-01-05 21:41:13.203 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:13 compute-0 podman[256341]: 2026-01-05 21:41:13.713867051 +0000 UTC m=+0.060392158 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:41:16 compute-0 nova_compute[186018]: 2026-01-05 21:41:16.172 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:18 compute-0 nova_compute[186018]: 2026-01-05 21:41:18.205 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:20 compute-0 podman[256365]: 2026-01-05 21:41:20.717819568 +0000 UTC m=+0.075987123 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, architecture=x86_64, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_id=kepler, vendor=Red Hat, Inc., io.buildah.version=1.29.0, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, release-0.7.12=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, vcs-type=git, managed_by=edpm_ansible, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, version=9.4, container_name=kepler, release=1214.1726694543, name=ubi9)
Jan 05 21:41:20 compute-0 podman[256366]: 2026-01-05 21:41:20.76636596 +0000 UTC m=+0.119320100 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true)
Jan 05 21:41:21 compute-0 nova_compute[186018]: 2026-01-05 21:41:21.175 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:23 compute-0 nova_compute[186018]: 2026-01-05 21:41:23.210 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:24 compute-0 podman[256404]: 2026-01-05 21:41:24.762482318 +0000 UTC m=+0.105402057 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, io.buildah.version=1.41.4, config_id=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true)
Jan 05 21:41:26 compute-0 nova_compute[186018]: 2026-01-05 21:41:26.178 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:28 compute-0 nova_compute[186018]: 2026-01-05 21:41:28.213 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:29 compute-0 nova_compute[186018]: 2026-01-05 21:41:29.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:29 compute-0 nova_compute[186018]: 2026-01-05 21:41:29.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:41:29 compute-0 podman[202426]: time="2026-01-05T21:41:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:41:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:41:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:41:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:41:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4839 "" "Go-http-client/1.1"
Jan 05 21:41:31 compute-0 nova_compute[186018]: 2026-01-05 21:41:31.182 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:31 compute-0 openstack_network_exporter[205720]: ERROR   21:41:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:41:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:41:31 compute-0 openstack_network_exporter[205720]: ERROR   21:41:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:41:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:41:32 compute-0 podman[256424]: 2026-01-05 21:41:32.761661445 +0000 UTC m=+0.098845579 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, version=9.6, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., container_name=openstack_network_exporter, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., release=1755695350, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, architecture=x86_64, build-date=2025-08-20T13:12:41, name=ubi9-minimal, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=, vcs-type=git, config_id=openstack_network_exporter)
Jan 05 21:41:32 compute-0 podman[256423]: 2026-01-05 21:41:32.809023779 +0000 UTC m=+0.157569834 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ovn_controller, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3)
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.215 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.495 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.496 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.500 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.003s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.502 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.608 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.677 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.679 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.775 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.785 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.846 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.847 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.912 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.065s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:33 compute-0 nova_compute[186018]: 2026-01-05 21:41:33.919 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.008 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.089s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.011 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.079 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.068s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.473 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.475 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4615MB free_disk=72.25796127319336GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.475 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.476 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.548 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.548 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.549 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.549 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.550 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.621 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.641 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.642 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:41:34 compute-0 nova_compute[186018]: 2026-01-05 21:41:34.643 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.167s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:41:35 compute-0 nova_compute[186018]: 2026-01-05 21:41:35.644 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:35 compute-0 nova_compute[186018]: 2026-01-05 21:41:35.645 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:35 compute-0 nova_compute[186018]: 2026-01-05 21:41:35.645 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:41:36 compute-0 nova_compute[186018]: 2026-01-05 21:41:36.184 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:36 compute-0 nova_compute[186018]: 2026-01-05 21:41:36.815 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:41:36 compute-0 nova_compute[186018]: 2026-01-05 21:41:36.815 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:41:36 compute-0 nova_compute[186018]: 2026-01-05 21:41:36.816 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:41:37 compute-0 podman[256486]: 2026-01-05 21:41:37.732611795 +0000 UTC m=+0.084068940 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3)
Jan 05 21:41:37 compute-0 podman[256487]: 2026-01-05 21:41:37.748153049 +0000 UTC m=+0.091534167 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:41:38 compute-0 nova_compute[186018]: 2026-01-05 21:41:38.221 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:40 compute-0 nova_compute[186018]: 2026-01-05 21:41:40.995 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.020 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.021 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.023 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.024 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.188 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:41 compute-0 nova_compute[186018]: 2026-01-05 21:41:41.463 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:41:42.881 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:41:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:41:42.882 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:41:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:41:42.883 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:41:43 compute-0 nova_compute[186018]: 2026-01-05 21:41:43.222 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:44 compute-0 podman[256528]: 2026-01-05 21:41:44.764869642 +0000 UTC m=+0.123379948 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:41:45 compute-0 nova_compute[186018]: 2026-01-05 21:41:45.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:46 compute-0 nova_compute[186018]: 2026-01-05 21:41:46.191 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:46 compute-0 nova_compute[186018]: 2026-01-05 21:41:46.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:48 compute-0 nova_compute[186018]: 2026-01-05 21:41:48.225 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:49 compute-0 nova_compute[186018]: 2026-01-05 21:41:49.476 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:49 compute-0 nova_compute[186018]: 2026-01-05 21:41:49.477 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:41:51 compute-0 nova_compute[186018]: 2026-01-05 21:41:51.194 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:51 compute-0 podman[256553]: 2026-01-05 21:41:51.745845581 +0000 UTC m=+0.097426684 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, config_id=kepler, distribution-scope=public, managed_by=edpm_ansible, architecture=x86_64, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, version=9.4, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:41:51 compute-0 podman[256554]: 2026-01-05 21:41:51.790564841 +0000 UTC m=+0.126965302 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
managed_by=edpm_ansible, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 05 21:41:53 compute-0 nova_compute[186018]: 2026-01-05 21:41:53.230 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:54 compute-0 nova_compute[186018]: 2026-01-05 21:41:54.493 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:41:54 compute-0 nova_compute[186018]: 2026-01-05 21:41:54.493 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:41:54 compute-0 nova_compute[186018]: 2026-01-05 21:41:54.521 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:41:55 compute-0 podman[256592]: 2026-01-05 21:41:55.755377955 +0000 UTC m=+0.099323814 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, container_name=ceilometer_agent_compute, managed_by=edpm_ansible)
Jan 05 21:41:56 compute-0 nova_compute[186018]: 2026-01-05 21:41:56.197 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:58 compute-0 nova_compute[186018]: 2026-01-05 21:41:58.231 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:41:59 compute-0 podman[202426]: time="2026-01-05T21:41:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:41:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:41:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:41:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:41:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4834 "" "Go-http-client/1.1"
Jan 05 21:42:01 compute-0 nova_compute[186018]: 2026-01-05 21:42:01.200 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:01 compute-0 openstack_network_exporter[205720]: ERROR   21:42:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:42:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:42:01 compute-0 openstack_network_exporter[205720]: ERROR   21:42:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:42:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:42:03 compute-0 nova_compute[186018]: 2026-01-05 21:42:03.234 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:03 compute-0 podman[256611]: 2026-01-05 21:42:03.741451766 +0000 UTC m=+0.078287297 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.expose-services=, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., architecture=x86_64, managed_by=edpm_ansible, com.redhat.component=ubi9-minimal-container, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://catalog.redhat.com/en/search?searchType=containers, vendor=Red Hat, Inc., build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, release=1755695350, maintainer=Red Hat, Inc., version=9.6, config_id=openstack_network_exporter, container_name=openstack_network_exporter, distribution-scope=public, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.buildah.version=1.33.7, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, vcs-type=git, io.openshift.tags=minimal rhel9)
Jan 05 21:42:03 compute-0 podman[256610]: 2026-01-05 21:42:03.778884644 +0000 UTC m=+0.127454407 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, container_name=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_controller, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, 
tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:42:06 compute-0 nova_compute[186018]: 2026-01-05 21:42:06.203 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:08 compute-0 nova_compute[186018]: 2026-01-05 21:42:08.237 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:08 compute-0 podman[256655]: 2026-01-05 21:42:08.714881044 +0000 UTC m=+0.063155896 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:42:08 compute-0 podman[256656]: 2026-01-05 21:42:08.776821091 +0000 UTC m=+0.107234906 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:42:11 compute-0 nova_compute[186018]: 2026-01-05 21:42:11.207 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:13 compute-0 nova_compute[186018]: 2026-01-05 21:42:13.241 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:15 compute-0 podman[256696]: 2026-01-05 21:42:15.711050255 +0000 UTC m=+0.069495237 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>)
Jan 05 21:42:16 compute-0 nova_compute[186018]: 2026-01-05 21:42:16.209 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:18 compute-0 nova_compute[186018]: 2026-01-05 21:42:18.242 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:21 compute-0 nova_compute[186018]: 2026-01-05 21:42:21.213 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:22 compute-0 podman[256721]: 2026-01-05 21:42:22.728437001 +0000 UTC m=+0.075543180 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.component=ubi9-container, io.k8s.display-name=Red Hat Universal Base Image 9, maintainer=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., managed_by=edpm_ansible, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, container_name=kepler, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.openshift.expose-services=, release=1214.1726694543, version=9.4, release-0.7.12=, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized 
applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, architecture=x86_64, build-date=2024-09-18T21:23:30, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, distribution-scope=public, vendor=Red Hat, Inc.)
Jan 05 21:42:22 compute-0 podman[256722]: 2026-01-05 21:42:22.740305368 +0000 UTC m=+0.078866715 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3)
Jan 05 21:42:23 compute-0 nova_compute[186018]: 2026-01-05 21:42:23.245 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:26 compute-0 nova_compute[186018]: 2026-01-05 21:42:26.216 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:26 compute-0 podman[256761]: 2026-01-05 21:42:26.783989066 +0000 UTC m=+0.131662332 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, 
org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0)
Jan 05 21:42:28 compute-0 nova_compute[186018]: 2026-01-05 21:42:28.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:29 compute-0 podman[202426]: time="2026-01-05T21:42:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:42:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:42:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:42:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:42:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4838 "" "Go-http-client/1.1"
Jan 05 21:42:30 compute-0 nova_compute[186018]: 2026-01-05 21:42:30.490 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:30 compute-0 nova_compute[186018]: 2026-01-05 21:42:30.490 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:42:31 compute-0 nova_compute[186018]: 2026-01-05 21:42:31.219 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:31 compute-0 openstack_network_exporter[205720]: ERROR   21:42:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:42:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:42:31 compute-0 openstack_network_exporter[205720]: ERROR   21:42:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:42:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.493 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.494 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.494 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.495 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.597 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.700 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.702 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.763 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.771 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.845 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.074s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.846 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.904 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.911 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.970 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.059s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:33 compute-0 nova_compute[186018]: 2026-01-05 21:42:33.972 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.069 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.097s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.449 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.451 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4613MB free_disk=72.25800323486328GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.452 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.452 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.602 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.604 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.605 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.606 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.607 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:42:34 compute-0 podman[256801]: 2026-01-05 21:42:34.773783585 +0000 UTC m=+0.115351583 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, org.label-schema.vendor=CentOS, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes 
Operator team, org.label-schema.build-date=20251202)
Jan 05 21:42:34 compute-0 podman[256802]: 2026-01-05 21:42:34.775555302 +0000 UTC m=+0.107505945 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, release=1755695350, vcs-type=git, architecture=x86_64, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., managed_by=edpm_ansible, name=ubi9-minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=)
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.804 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.822 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.824 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:42:34 compute-0 nova_compute[186018]: 2026-01-05 21:42:34.825 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.372s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:42:35 compute-0 nova_compute[186018]: 2026-01-05 21:42:35.825 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:35 compute-0 nova_compute[186018]: 2026-01-05 21:42:35.826 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:35 compute-0 nova_compute[186018]: 2026-01-05 21:42:35.826 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:42:35 compute-0 nova_compute[186018]: 2026-01-05 21:42:35.826 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:42:36 compute-0 nova_compute[186018]: 2026-01-05 21:42:36.221 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:36 compute-0 nova_compute[186018]: 2026-01-05 21:42:36.830 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:42:36 compute-0 nova_compute[186018]: 2026-01-05 21:42:36.831 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:42:36 compute-0 nova_compute[186018]: 2026-01-05 21:42:36.832 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:42:36 compute-0 nova_compute[186018]: 2026-01-05 21:42:36.832 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:42:38 compute-0 nova_compute[186018]: 2026-01-05 21:42:38.253 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:39 compute-0 podman[256845]: 2026-01-05 21:42:39.750859239 +0000 UTC m=+0.093399577 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.build-date=20251202, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent)
Jan 05 21:42:39 compute-0 podman[256846]: 2026-01-05 21:42:39.769762449 +0000 UTC m=+0.106677618 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:42:40 compute-0 nova_compute[186018]: 2026-01-05 21:42:40.080 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:42:40 compute-0 nova_compute[186018]: 2026-01-05 21:42:40.094 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:42:40 compute-0 nova_compute[186018]: 2026-01-05 21:42:40.094 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:42:40 compute-0 nova_compute[186018]: 2026-01-05 21:42:40.095 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:40 compute-0 nova_compute[186018]: 2026-01-05 21:42:40.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:41 compute-0 nova_compute[186018]: 2026-01-05 21:42:41.224 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:42 compute-0 nova_compute[186018]: 2026-01-05 21:42:42.456 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:42:42.883 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:42:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:42:42.883 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:42:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:42:42.884 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:42:43 compute-0 nova_compute[186018]: 2026-01-05 21:42:43.256 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:43 compute-0 nova_compute[186018]: 2026-01-05 21:42:43.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:43 compute-0 nova_compute[186018]: 2026-01-05 21:42:43.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:46 compute-0 nova_compute[186018]: 2026-01-05 21:42:46.228 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:46 compute-0 podman[256883]: 2026-01-05 21:42:46.786671028 +0000 UTC m=+0.121952963 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:42:47 compute-0 nova_compute[186018]: 2026-01-05 21:42:47.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:42:48 compute-0 nova_compute[186018]: 2026-01-05 21:42:48.258 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:51 compute-0 nova_compute[186018]: 2026-01-05 21:42:51.232 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:53 compute-0 nova_compute[186018]: 2026-01-05 21:42:53.261 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:53 compute-0 podman[256908]: 2026-01-05 21:42:53.806805189 +0000 UTC m=+0.126598842 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.29.0, managed_by=edpm_ansible, release-0.7.12=, config_id=kepler, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.component=ubi9-container, version=9.4, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.tags=base rhel9, name=ubi9, release=1214.1726694543, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, container_name=kepler, maintainer=Red Hat, Inc., summary=Provides the latest release of Red Hat Universal Base Image 9., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543)
Jan 05 21:42:53 compute-0 podman[256909]: 2026-01-05 21:42:53.830855808 +0000 UTC m=+0.140080034 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:42:56 compute-0 nova_compute[186018]: 2026-01-05 21:42:56.236 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:57 compute-0 podman[256943]: 2026-01-05 21:42:57.753296797 +0000 UTC m=+0.100872984 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true)
Jan 05 21:42:58 compute-0 nova_compute[186018]: 2026-01-05 21:42:58.265 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:42:59 compute-0 podman[202426]: time="2026-01-05T21:42:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:42:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:42:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:42:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:42:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4829 "" "Go-http-client/1.1"
Jan 05 21:43:01 compute-0 nova_compute[186018]: 2026-01-05 21:43:01.237 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:01 compute-0 openstack_network_exporter[205720]: ERROR   21:43:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:43:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:43:01 compute-0 openstack_network_exporter[205720]: ERROR   21:43:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:43:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:43:03 compute-0 nova_compute[186018]: 2026-01-05 21:43:03.268 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:05 compute-0 podman[256963]: 2026-01-05 21:43:05.784070328 +0000 UTC m=+0.113497280 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=minimal rhel9, architecture=x86_64, distribution-scope=public, container_name=openstack_network_exporter, io.buildah.version=1.33.7, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., build-date=2025-08-20T13:12:41, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, name=ubi9-minimal, vcs-type=git, vendor=Red Hat, Inc., version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', 
'/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, managed_by=edpm_ansible, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., io.openshift.expose-services=, config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers)
Jan 05 21:43:05 compute-0 podman[256962]: 2026-01-05 21:43:05.840161979 +0000 UTC m=+0.186090051 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, 
container_name=ovn_controller, managed_by=edpm_ansible)
Jan 05 21:43:06 compute-0 nova_compute[186018]: 2026-01-05 21:43:06.241 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.791 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.792 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.792 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.793 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.793 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.804 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'name': 'te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.808 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f062e10>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.810 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.816 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.817 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.817 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.817 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.818 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.820 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.820 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.821 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.821 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.821 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.822 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:43:07.818184) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.823 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:43:07.821763) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.827 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.833 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.839 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 17 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.840 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.840 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.841 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.841 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.842 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.842 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.843 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.844 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.844 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.844 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.844 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets volume: 16 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.845 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:43:07.841217) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.846 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:43:07.844743) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.847 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.848 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.849 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.849 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.849 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.849 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.849 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.850 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes volume: 1620 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.850 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.850 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.851 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.851 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.851 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.852 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.852 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.852 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.853 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.853 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.853 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.854 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.855 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.855 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.855 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.855 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.856 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.856 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.856 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.856 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.856 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.857 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.857 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.857 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.857 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.857 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.858 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.858 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:43:07.847862) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:43:07.849900) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.859 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:43:07.852017) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:43:07.854449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:43:07.856362) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.860 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:43:07.858155) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.872 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 30023680 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.872 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.886 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.886 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.899 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.899 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.900 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.900 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.900 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.900 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.900 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.901 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.901 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.901 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.901 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.902 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.903 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.903 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.903 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.903 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:43:07.900998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:43:07.902980) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.905 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:43:07.904667) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.922 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/memory.usage volume: 43.44140625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.943 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.963 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 42.5859375 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.964 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:43:07.964809) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.965 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.966 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.967 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.967 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.967 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.968 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:43:07.966424) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:07.969 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:43:07.968665) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.002 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 29338624 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.003 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 246078 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.035 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.035 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.066 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.066 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.067 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.080 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.080 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.080 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.080 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.081 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.081 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.081 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.081 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 1688 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.081 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 470547540 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 52877300 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.082 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.083 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.083 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 603913622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.083 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 71189160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.083 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.083 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 1056 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 107 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.084 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.085 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.090 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.090 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 29884416 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.091 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.092 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.092 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.092 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.092 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 72855552 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.093 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.094 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.094 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.094 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.094 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/cpu volume: 287620000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 42800000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.095 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 335920000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.096 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 2697062897 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.097 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.097 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.097 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.097 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3937989191 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.098 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.099 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 319 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.099 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.099 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.099 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.100 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.100 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.100 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.101 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.102 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.105 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:43:08.080975) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:43:08.082446) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:43:08.084311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:43:08.091429) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.106 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:43:08.093475) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:43:08.095420) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:43:08.096841) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:43:08.107 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:43:08.098951) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:43:08 compute-0 nova_compute[186018]: 2026-01-05 21:43:08.270 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:10 compute-0 podman[257009]: 2026-01-05 21:43:10.731866525 +0000 UTC m=+0.072704744 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:43:10 compute-0 podman[257008]: 2026-01-05 21:43:10.746313815 +0000 UTC m=+0.100036959 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, org.label-schema.build-date=20251202)
Jan 05 21:43:11 compute-0 nova_compute[186018]: 2026-01-05 21:43:11.243 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:13 compute-0 nova_compute[186018]: 2026-01-05 21:43:13.274 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:16 compute-0 nova_compute[186018]: 2026-01-05 21:43:16.246 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:17 compute-0 podman[257050]: 2026-01-05 21:43:17.749521245 +0000 UTC m=+0.086734263 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:43:18 compute-0 nova_compute[186018]: 2026-01-05 21:43:18.277 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:21 compute-0 nova_compute[186018]: 2026-01-05 21:43:21.248 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:23 compute-0 nova_compute[186018]: 2026-01-05 21:43:23.279 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:24 compute-0 podman[257074]: 2026-01-05 21:43:24.740048625 +0000 UTC m=+0.080460670 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9, config_id=kepler, container_name=kepler, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, version=9.4, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.tags=base rhel9, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, release-0.7.12=, distribution-scope=public, io.openshift.expose-services=, architecture=x86_64, managed_by=edpm_ansible, vcs-type=git)
Jan 05 21:43:24 compute-0 podman[257075]: 2026-01-05 21:43:24.74572065 +0000 UTC m=+0.084379394 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ceilometer_agent_ipmi)
Jan 05 21:43:26 compute-0 nova_compute[186018]: 2026-01-05 21:43:26.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:28 compute-0 nova_compute[186018]: 2026-01-05 21:43:28.283 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:28 compute-0 podman[257113]: 2026-01-05 21:43:28.759702669 +0000 UTC m=+0.102118849 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.license=GPLv2, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 10 Base Image, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_id=ceilometer_agent_compute)
Jan 05 21:43:29 compute-0 podman[202426]: time="2026-01-05T21:43:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:43:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:43:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:43:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:43:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4835 "" "Go-http-client/1.1"
Jan 05 21:43:30 compute-0 nova_compute[186018]: 2026-01-05 21:43:30.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:30 compute-0 nova_compute[186018]: 2026-01-05 21:43:30.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:43:31 compute-0 nova_compute[186018]: 2026-01-05 21:43:31.254 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:31 compute-0 openstack_network_exporter[205720]: ERROR   21:43:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:43:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:43:31 compute-0 openstack_network_exporter[205720]: ERROR   21:43:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:43:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.284 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.500 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.501 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.502 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.503 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.589 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.671 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.673 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.733 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.745 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.810 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.066s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.812 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.872 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.880 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.942 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:33 compute-0 nova_compute[186018]: 2026-01-05 21:43:33.944 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:43:34 compute-0 nova_compute[186018]: 2026-01-05 21:43:34.039 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.095s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:43:34 compute-0 nova_compute[186018]: 2026-01-05 21:43:34.428 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:43:34 compute-0 nova_compute[186018]: 2026-01-05 21:43:34.431 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4598MB free_disk=72.25800323486328GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:43:34 compute-0 nova_compute[186018]: 2026-01-05 21:43:34.432 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:43:34 compute-0 nova_compute[186018]: 2026-01-05 21:43:34.432 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.091 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.093 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.093 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.094 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.095 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.116 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.134 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.135 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.160 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.201 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:43:35 compute-0 nova_compute[186018]: 2026-01-05 21:43:35.312 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:43:36 compute-0 nova_compute[186018]: 2026-01-05 21:43:36.024 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:43:36 compute-0 nova_compute[186018]: 2026-01-05 21:43:36.027 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:43:36 compute-0 nova_compute[186018]: 2026-01-05 21:43:36.028 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 1.596s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:43:36 compute-0 nova_compute[186018]: 2026-01-05 21:43:36.256 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:36 compute-0 podman[257151]: 2026-01-05 21:43:36.814552571 +0000 UTC m=+0.147576481 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, vendor=Red Hat, Inc., container_name=openstack_network_exporter, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, release=1755695350, version=9.6, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, maintainer=Red Hat, Inc., architecture=x86_64)
Jan 05 21:43:36 compute-0 podman[257150]: 2026-01-05 21:43:36.834604484 +0000 UTC m=+0.171862387 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, config_id=ovn_controller, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202)
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.029 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.030 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.030 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.286 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.873 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.874 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:43:38 compute-0 nova_compute[186018]: 2026-01-05 21:43:38.874 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:43:40 compute-0 nova_compute[186018]: 2026-01-05 21:43:40.893 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:43:40 compute-0 nova_compute[186018]: 2026-01-05 21:43:40.918 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:43:40 compute-0 nova_compute[186018]: 2026-01-05 21:43:40.919 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:43:40 compute-0 nova_compute[186018]: 2026-01-05 21:43:40.920 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:41 compute-0 nova_compute[186018]: 2026-01-05 21:43:41.258 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:41 compute-0 podman[257196]: 2026-01-05 21:43:41.741511042 +0000 UTC m=+0.084659732 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:43:41 compute-0 podman[257195]: 2026-01-05 21:43:41.747212518 +0000 UTC m=+0.100317457 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_managed=true, container_name=ovn_metadata_agent)
Jan 05 21:43:42 compute-0 nova_compute[186018]: 2026-01-05 21:43:42.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:43:42.884 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:43:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:43:42.885 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:43:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:43:42.885 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:43:43 compute-0 nova_compute[186018]: 2026-01-05 21:43:43.287 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:43 compute-0 nova_compute[186018]: 2026-01-05 21:43:43.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:43 compute-0 nova_compute[186018]: 2026-01-05 21:43:43.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:46 compute-0 nova_compute[186018]: 2026-01-05 21:43:46.263 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:48 compute-0 nova_compute[186018]: 2026-01-05 21:43:48.289 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:48 compute-0 nova_compute[186018]: 2026-01-05 21:43:48.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:43:48 compute-0 podman[257234]: 2026-01-05 21:43:48.724278997 +0000 UTC m=+0.072439857 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:43:51 compute-0 nova_compute[186018]: 2026-01-05 21:43:51.265 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:53 compute-0 nova_compute[186018]: 2026-01-05 21:43:53.293 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:55 compute-0 podman[257261]: 2026-01-05 21:43:55.731137312 +0000 UTC m=+0.083533319 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, build-date=2024-09-18T21:23:30, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, com.redhat.component=ubi9-container, managed_by=edpm_ansible, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., config_id=kepler, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1214.1726694543, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, maintainer=Red Hat, Inc., vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, architecture=x86_64, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:43:55 compute-0 podman[257262]: 2026-01-05 21:43:55.770689532 +0000 UTC m=+0.105306112 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, tcib_managed=true)
Jan 05 21:43:56 compute-0 nova_compute[186018]: 2026-01-05 21:43:56.269 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:58 compute-0 nova_compute[186018]: 2026-01-05 21:43:58.297 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:43:59 compute-0 podman[202426]: time="2026-01-05T21:43:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:43:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:43:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:43:59 compute-0 podman[257299]: 2026-01-05 21:43:59.76638695 +0000 UTC m=+0.111989126 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, org.label-schema.schema-version=1.0)
Jan 05 21:43:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:43:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4834 "" "Go-http-client/1.1"
Jan 05 21:44:01 compute-0 nova_compute[186018]: 2026-01-05 21:44:01.271 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:01 compute-0 openstack_network_exporter[205720]: ERROR   21:44:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:44:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:44:01 compute-0 openstack_network_exporter[205720]: ERROR   21:44:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:44:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:44:03 compute-0 nova_compute[186018]: 2026-01-05 21:44:03.300 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:06 compute-0 nova_compute[186018]: 2026-01-05 21:44:06.273 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:07 compute-0 podman[257319]: 2026-01-05 21:44:07.764513038 +0000 UTC m=+0.092251893 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, version=9.6, vcs-type=git, architecture=x86_64, name=ubi9-minimal, url=https://catalog.redhat.com/en/search?searchType=containers, build-date=2025-08-20T13:12:41, release=1755695350, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, io.openshift.tags=minimal rhel9, maintainer=Red Hat, Inc., vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, distribution-scope=public, io.buildah.version=1.33.7, vendor=Red Hat, Inc., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI)
Jan 05 21:44:07 compute-0 podman[257318]: 2026-01-05 21:44:07.813150912 +0000 UTC m=+0.138582320 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']})
Jan 05 21:44:08 compute-0 nova_compute[186018]: 2026-01-05 21:44:08.305 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:11 compute-0 nova_compute[186018]: 2026-01-05 21:44:11.277 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:12 compute-0 podman[257364]: 2026-01-05 21:44:12.708006504 +0000 UTC m=+0.065966968 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_metadata_agent, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, config_id=ovn_metadata_agent)
Jan 05 21:44:12 compute-0 podman[257365]: 2026-01-05 21:44:12.759478581 +0000 UTC m=+0.110266047 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:44:13 compute-0 nova_compute[186018]: 2026-01-05 21:44:13.307 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:16 compute-0 nova_compute[186018]: 2026-01-05 21:44:16.280 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:18 compute-0 nova_compute[186018]: 2026-01-05 21:44:18.310 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:19 compute-0 podman[257407]: 2026-01-05 21:44:19.751048229 +0000 UTC m=+0.090344467 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:44:21 compute-0 nova_compute[186018]: 2026-01-05 21:44:21.284 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:23 compute-0 nova_compute[186018]: 2026-01-05 21:44:23.314 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:26 compute-0 nova_compute[186018]: 2026-01-05 21:44:26.286 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:26 compute-0 podman[257432]: 2026-01-05 21:44:26.80348867 +0000 UTC m=+0.126339184 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true)
Jan 05 21:44:26 compute-0 podman[257431]: 2026-01-05 21:44:26.819968489 +0000 UTC m=+0.149284151 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.openshift.tags=base rhel9, name=ubi9, vcs-type=git, com.redhat.component=ubi9-container, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, config_id=kepler, io.openshift.expose-services=, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release-0.7.12=, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., distribution-scope=public, io.buildah.version=1.29.0, architecture=x86_64, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible)
Jan 05 21:44:28 compute-0 nova_compute[186018]: 2026-01-05 21:44:28.317 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:29 compute-0 podman[202426]: time="2026-01-05T21:44:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:44:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:44:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:44:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:44:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4831 "" "Go-http-client/1.1"
Jan 05 21:44:30 compute-0 nova_compute[186018]: 2026-01-05 21:44:30.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:30 compute-0 nova_compute[186018]: 2026-01-05 21:44:30.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:44:30 compute-0 podman[257468]: 2026-01-05 21:44:30.763962974 +0000 UTC m=+0.112884213 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, config_id=ceilometer_agent_compute, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4)
Jan 05 21:44:31 compute-0 nova_compute[186018]: 2026-01-05 21:44:31.289 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:31 compute-0 openstack_network_exporter[205720]: ERROR   21:44:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:44:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:44:31 compute-0 openstack_network_exporter[205720]: ERROR   21:44:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:44:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.321 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.503 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.504 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.504 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.504 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.655 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.718 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.721 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.785 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.792 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.854 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.856 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.938 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:33 compute-0 nova_compute[186018]: 2026-01-05 21:44:33.949 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.013 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.014 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.084 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.070s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.467 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.468 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4624MB free_disk=72.25718688964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.468 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.469 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.549 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.550 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.550 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.551 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.551 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.627 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.643 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.645 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:44:34 compute-0 nova_compute[186018]: 2026-01-05 21:44:34.645 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.176s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:44:36 compute-0 nova_compute[186018]: 2026-01-05 21:44:36.293 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.646 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.647 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.647 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.881 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.882 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:44:37 compute-0 nova_compute[186018]: 2026-01-05 21:44:37.882 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:44:38 compute-0 nova_compute[186018]: 2026-01-05 21:44:38.325 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:38 compute-0 podman[257512]: 2026-01-05 21:44:38.758988246 +0000 UTC m=+0.106646372 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, container_name=ovn_controller, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:44:38 compute-0 podman[257513]: 2026-01-05 21:44:38.760295084 +0000 UTC m=+0.092862251 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vendor=Red Hat, Inc., version=9.6, distribution-scope=public, release=1755695350, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, container_name=openstack_network_exporter, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, managed_by=edpm_ansible, io.openshift.tags=minimal rhel9, io.openshift.expose-services=, vcs-type=git, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.component=ubi9-minimal-container, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., io.buildah.version=1.33.7, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, name=ubi9-minimal, config_id=openstack_network_exporter, architecture=x86_64, build-date=2025-08-20T13:12:41, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc.)
Jan 05 21:44:38 compute-0 nova_compute[186018]: 2026-01-05 21:44:38.877 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:44:38 compute-0 nova_compute[186018]: 2026-01-05 21:44:38.903 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:44:38 compute-0 nova_compute[186018]: 2026-01-05 21:44:38.904 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:44:38 compute-0 nova_compute[186018]: 2026-01-05 21:44:38.904 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:41 compute-0 nova_compute[186018]: 2026-01-05 21:44:41.297 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:44:42.885 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:44:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:44:42.886 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:44:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:44:42.887 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:44:43 compute-0 nova_compute[186018]: 2026-01-05 21:44:43.327 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:43 compute-0 nova_compute[186018]: 2026-01-05 21:44:43.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:43 compute-0 nova_compute[186018]: 2026-01-05 21:44:43.490 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:43 compute-0 podman[257559]: 2026-01-05 21:44:43.738264103 +0000 UTC m=+0.093049166 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.build-date=20251202, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_managed=true, container_name=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:44:43 compute-0 podman[257560]: 2026-01-05 21:44:43.754503985 +0000 UTC m=+0.096549088 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter)
Jan 05 21:44:44 compute-0 nova_compute[186018]: 2026-01-05 21:44:44.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:44 compute-0 nova_compute[186018]: 2026-01-05 21:44:44.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:46 compute-0 nova_compute[186018]: 2026-01-05 21:44:46.300 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:48 compute-0 nova_compute[186018]: 2026-01-05 21:44:48.330 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:48 compute-0 nova_compute[186018]: 2026-01-05 21:44:48.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:44:50 compute-0 podman[257603]: 2026-01-05 21:44:50.729509341 +0000 UTC m=+0.077069052 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:44:51 compute-0 nova_compute[186018]: 2026-01-05 21:44:51.302 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:53 compute-0 nova_compute[186018]: 2026-01-05 21:44:53.333 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:56 compute-0 nova_compute[186018]: 2026-01-05 21:44:56.305 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:57 compute-0 podman[257628]: 2026-01-05 21:44:57.790467977 +0000 UTC m=+0.120551315 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, managed_by=edpm_ansible, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS)
Jan 05 21:44:57 compute-0 podman[257627]: 2026-01-05 21:44:57.796021629 +0000 UTC m=+0.126234551 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.display-name=Red Hat Universal Base Image 9, managed_by=edpm_ansible, vendor=Red Hat, Inc., com.redhat.component=ubi9-container, io.openshift.expose-services=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, architecture=x86_64, distribution-scope=public, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_id=kepler, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, maintainer=Red Hat, Inc., release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., build-date=2024-09-18T21:23:30, io.openshift.tags=base rhel9, container_name=kepler, name=ubi9, release-0.7.12=, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.29.0)
Jan 05 21:44:58 compute-0 nova_compute[186018]: 2026-01-05 21:44:58.335 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:44:59 compute-0 podman[202426]: time="2026-01-05T21:44:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:44:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:44:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:44:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:44:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4838 "" "Go-http-client/1.1"
Jan 05 21:45:01 compute-0 nova_compute[186018]: 2026-01-05 21:45:01.308 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:01 compute-0 openstack_network_exporter[205720]: ERROR   21:45:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:45:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:45:01 compute-0 openstack_network_exporter[205720]: ERROR   21:45:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:45:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:45:01 compute-0 podman[257666]: 2026-01-05 21:45:01.789700598 +0000 UTC m=+0.124227742 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', 
'/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 10 Base Image, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_compute, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true)
Jan 05 21:45:03 compute-0 nova_compute[186018]: 2026-01-05 21:45:03.339 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:06 compute-0 nova_compute[186018]: 2026-01-05 21:45:06.310 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.793 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.793 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.794 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163f7e7b30>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.799 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'name': 'te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.802 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.805 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.806 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.806 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.806 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.806 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.807 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:45:07.806449) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.808 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:45:07.807776) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.812 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.816 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.822 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.823 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.823 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.823 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.824 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.824 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.824 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.824 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.825 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.826 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.827 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.827 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.827 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.827 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.828 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.829 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.829 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:45:07.824658) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:45:07.828214) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:45:07.832479) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.835 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.835 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.835 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.836 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.837 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.838 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.840 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.840 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.841 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.841 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.841 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.842 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:45:07.836137) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.843 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:45:07.841069) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.843 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.844 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.844 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.844 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.844 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.845 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.845 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.845 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.846 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:45:07.845311) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.846 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.846 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.847 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.847 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.848 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.848 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.848 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.848 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.849 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.849 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.850 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.850 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:45:07.848814) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.851 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.851 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.851 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.852 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.852 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.852 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.852 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:45:07.852476) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.874 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.874 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.891 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.891 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.907 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.907 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.908 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.908 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.909 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.909 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.909 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.910 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:45:07.909726) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.909 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.910 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.911 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.911 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 630 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.912 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.912 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.912 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.913 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.913 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.913 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.913 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.914 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:45:07.914128) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.914 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.915 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.915 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.915 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.916 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.916 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.916 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.917 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.917 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.917 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:45:07.917471) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.917 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.946 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/memory.usage volume: 42.5078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.964 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.986 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 42.75 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.988 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.988 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.988 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.989 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.989 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.989 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.990 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:45:07.989599) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.990 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.990 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.991 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.992 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.992 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.993 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.993 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.993 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.994 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.994 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:45:07.993902) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.994 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.995 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.995 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.996 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.997 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.997 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.998 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.999 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.999 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:45:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.999 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:07.999 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.000 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.000 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:45:08.000123) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.049 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.050 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.103 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.104 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.144 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.145 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.145 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.145 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.145 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.145 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:45:08.146099) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.146 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 2318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:45:08.147582) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.147 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 496970419 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.148 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 60371496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.148 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.148 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.148 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 603913622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.148 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 71189160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:45:08.149647) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.149 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.150 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.150 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.150 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.151 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.152 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.153 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.153 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.153 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:45:08.152113) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.153 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.154 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.155 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.155 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.155 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.155 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:45:08.154533) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/cpu volume: 333890000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.156 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 44390000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:45:08.156625) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 337470000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.157 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 2761151049 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.158 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.158 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.158 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:45:08.157892) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.158 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.158 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3937989191 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.159 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.160 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.160 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.160 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:45:08.159989) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.160 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.161 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.161 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.161 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:45:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:45:08 compute-0 nova_compute[186018]: 2026-01-05 21:45:08.341 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:09 compute-0 podman[257688]: 2026-01-05 21:45:09.777074329 +0000 UTC m=+0.105398625 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., managed_by=edpm_ansible, io.openshift.expose-services=, name=ubi9-minimal, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, build-date=2025-08-20T13:12:41, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, container_name=openstack_network_exporter, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, architecture=x86_64, com.redhat.component=ubi9-minimal-container)
Jan 05 21:45:09 compute-0 podman[257687]: 2026-01-05 21:45:09.794040612 +0000 UTC m=+0.121400750 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, 
org.label-schema.vendor=CentOS, tcib_managed=true)
Jan 05 21:45:11 compute-0 nova_compute[186018]: 2026-01-05 21:45:11.312 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:13 compute-0 nova_compute[186018]: 2026-01-05 21:45:13.345 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:14 compute-0 podman[257732]: 2026-01-05 21:45:14.772001704 +0000 UTC m=+0.101724298 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:45:14 compute-0 podman[257731]: 2026-01-05 21:45:14.791710147 +0000 UTC m=+0.135794538 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202)
Jan 05 21:45:16 compute-0 nova_compute[186018]: 2026-01-05 21:45:16.316 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:18 compute-0 nova_compute[186018]: 2026-01-05 21:45:18.349 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:21 compute-0 nova_compute[186018]: 2026-01-05 21:45:21.319 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:21 compute-0 podman[257772]: 2026-01-05 21:45:21.715000889 +0000 UTC m=+0.065398372 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible)
Jan 05 21:45:23 compute-0 nova_compute[186018]: 2026-01-05 21:45:23.352 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:26 compute-0 nova_compute[186018]: 2026-01-05 21:45:26.322 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:28 compute-0 nova_compute[186018]: 2026-01-05 21:45:28.356 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:28 compute-0 podman[257797]: 2026-01-05 21:45:28.753975936 +0000 UTC m=+0.095595360 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.build-date=20251202, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ceilometer_agent_ipmi, container_name=ceilometer_agent_ipmi)
Jan 05 21:45:28 compute-0 podman[257796]: 2026-01-05 21:45:28.756633293 +0000 UTC m=+0.107663021 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, summary=Provides the latest release of Red Hat Universal Base Image 9., version=9.4, architecture=x86_64, release-0.7.12=, build-date=2024-09-18T21:23:30, com.redhat.component=ubi9-container, distribution-scope=public, io.openshift.tags=base rhel9, io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, io.buildah.version=1.29.0, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1214.1726694543, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, config_id=kepler, io.openshift.expose-services=, container_name=kepler, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9)
Jan 05 21:45:29 compute-0 podman[202426]: time="2026-01-05T21:45:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:45:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:45:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:45:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:45:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4837 "" "Go-http-client/1.1"
Jan 05 21:45:31 compute-0 nova_compute[186018]: 2026-01-05 21:45:31.326 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:31 compute-0 openstack_network_exporter[205720]: ERROR   21:45:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:45:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:45:31 compute-0 openstack_network_exporter[205720]: ERROR   21:45:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:45:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:45:32 compute-0 nova_compute[186018]: 2026-01-05 21:45:32.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:32 compute-0 nova_compute[186018]: 2026-01-05 21:45:32.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:45:32 compute-0 podman[257834]: 2026-01-05 21:45:32.765463763 +0000 UTC m=+0.102854711 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251224, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute)
Jan 05 21:45:33 compute-0 nova_compute[186018]: 2026-01-05 21:45:33.357 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.499 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.500 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.501 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.502 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.610 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.710 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.712 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.772 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.779 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.837 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.839 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.901 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.912 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.976 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:34 compute-0 nova_compute[186018]: 2026-01-05 21:45:34.979 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.060 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.082s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.482 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.483 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4605MB free_disk=72.25718688964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.484 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.485 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.567 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.567 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.567 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.568 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.568 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.651 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.665 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.667 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:45:35 compute-0 nova_compute[186018]: 2026-01-05 21:45:35.667 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.182s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:45:36 compute-0 nova_compute[186018]: 2026-01-05 21:45:36.330 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.360 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.668 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.668 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.668 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.669 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.923 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.923 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.923 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:45:38 compute-0 nova_compute[186018]: 2026-01-05 21:45:38.924 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:45:40 compute-0 nova_compute[186018]: 2026-01-05 21:45:40.213 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:45:40 compute-0 nova_compute[186018]: 2026-01-05 21:45:40.233 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:45:40 compute-0 nova_compute[186018]: 2026-01-05 21:45:40.234 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:45:40 compute-0 nova_compute[186018]: 2026-01-05 21:45:40.235 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:40 compute-0 podman[257873]: 2026-01-05 21:45:40.756406648 +0000 UTC m=+0.097398243 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.expose-services=, vendor=Red Hat, Inc., description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, io.openshift.tags=minimal rhel9, com.redhat.component=ubi9-minimal-container, vcs-type=git, build-date=2025-08-20T13:12:41, architecture=x86_64, container_name=openstack_network_exporter, config_id=openstack_network_exporter, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, distribution-scope=public, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., name=ubi9-minimal, release=1755695350, managed_by=edpm_ansible, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:45:40 compute-0 podman[257872]: 2026-01-05 21:45:40.808871803 +0000 UTC m=+0.151275759 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_controller, container_name=ovn_controller, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', 
'/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, tcib_managed=true)
Jan 05 21:45:41 compute-0 nova_compute[186018]: 2026-01-05 21:45:41.332 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:45:42.887 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:45:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:45:42.888 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:45:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:45:42.888 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:45:43 compute-0 nova_compute[186018]: 2026-01-05 21:45:43.361 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:44 compute-0 nova_compute[186018]: 2026-01-05 21:45:44.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:44 compute-0 nova_compute[186018]: 2026-01-05 21:45:44.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:45 compute-0 podman[257919]: 2026-01-05 21:45:45.742850187 +0000 UTC m=+0.078740960 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']})
Jan 05 21:45:45 compute-0 podman[257918]: 2026-01-05 21:45:45.755075742 +0000 UTC m=+0.093928421 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_managed=true, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, managed_by=edpm_ansible, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:45:46 compute-0 nova_compute[186018]: 2026-01-05 21:45:46.335 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:46 compute-0 nova_compute[186018]: 2026-01-05 21:45:46.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:48 compute-0 nova_compute[186018]: 2026-01-05 21:45:48.365 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:48 compute-0 nova_compute[186018]: 2026-01-05 21:45:48.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:45:51 compute-0 nova_compute[186018]: 2026-01-05 21:45:51.338 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:52 compute-0 podman[257958]: 2026-01-05 21:45:52.736186071 +0000 UTC m=+0.075469095 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:45:53 compute-0 nova_compute[186018]: 2026-01-05 21:45:53.367 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:56 compute-0 nova_compute[186018]: 2026-01-05 21:45:56.340 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:58 compute-0 nova_compute[186018]: 2026-01-05 21:45:58.369 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:45:59 compute-0 podman[257984]: 2026-01-05 21:45:59.724662607 +0000 UTC m=+0.068466041 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:45:59 compute-0 podman[257983]: 2026-01-05 21:45:59.725398999 +0000 UTC m=+0.073700774 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, container_name=kepler, io.openshift.tags=base rhel9, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, name=ubi9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vendor=Red Hat, Inc., version=9.4, architecture=x86_64, com.redhat.component=ubi9-container, managed_by=edpm_ansible, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9, release=1214.1726694543)
Jan 05 21:45:59 compute-0 podman[202426]: time="2026-01-05T21:45:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:45:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:45:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:45:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:45:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4830 "" "Go-http-client/1.1"
Jan 05 21:46:01 compute-0 nova_compute[186018]: 2026-01-05 21:46:01.342 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:01 compute-0 openstack_network_exporter[205720]: ERROR   21:46:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:46:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:46:01 compute-0 openstack_network_exporter[205720]: ERROR   21:46:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:46:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:46:03 compute-0 nova_compute[186018]: 2026-01-05 21:46:03.372 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:03 compute-0 podman[258022]: 2026-01-05 21:46:03.759427482 +0000 UTC m=+0.106415845 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2)
Jan 05 21:46:06 compute-0 nova_compute[186018]: 2026-01-05 21:46:06.348 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:08 compute-0 nova_compute[186018]: 2026-01-05 21:46:08.375 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:11 compute-0 nova_compute[186018]: 2026-01-05 21:46:11.352 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:11 compute-0 podman[258043]: 2026-01-05 21:46:11.798535876 +0000 UTC m=+0.129995430 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., vcs-type=git, version=9.6, com.redhat.component=ubi9-minimal-container, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, distribution-scope=public, build-date=2025-08-20T13:12:41, io.buildah.version=1.33.7, managed_by=edpm_ansible, architecture=x86_64, 
config_id=openstack_network_exporter, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:46:11 compute-0 podman[258042]: 2026-01-05 21:46:11.818723043 +0000 UTC m=+0.166858462 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, 
managed_by=edpm_ansible, org.label-schema.vendor=CentOS)
Jan 05 21:46:13 compute-0 nova_compute[186018]: 2026-01-05 21:46:13.378 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:16 compute-0 nova_compute[186018]: 2026-01-05 21:46:16.356 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:16 compute-0 podman[258088]: 2026-01-05 21:46:16.739841134 +0000 UTC m=+0.076396392 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, maintainer=OpenStack Kubernetes Operator team, container_name=ovn_metadata_agent, org.label-schema.vendor=CentOS, tcib_managed=true, config_id=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:46:16 compute-0 podman[258089]: 2026-01-05 21:46:16.762493262 +0000 UTC m=+0.094363204 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:46:18 compute-0 nova_compute[186018]: 2026-01-05 21:46:18.380 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:21 compute-0 nova_compute[186018]: 2026-01-05 21:46:21.360 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:23 compute-0 nova_compute[186018]: 2026-01-05 21:46:23.382 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:23 compute-0 podman[258129]: 2026-01-05 21:46:23.766581426 +0000 UTC m=+0.100905854 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:46:26 compute-0 nova_compute[186018]: 2026-01-05 21:46:26.364 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:28 compute-0 nova_compute[186018]: 2026-01-05 21:46:28.384 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:29 compute-0 podman[202426]: time="2026-01-05T21:46:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:46:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:46:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:46:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:46:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4844 "" "Go-http-client/1.1"
Jan 05 21:46:30 compute-0 podman[258155]: 2026-01-05 21:46:30.73981178 +0000 UTC m=+0.077314549 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, managed_by=edpm_ansible, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi)
Jan 05 21:46:30 compute-0 podman[258154]: 2026-01-05 21:46:30.76629816 +0000 UTC m=+0.107608680 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, build-date=2024-09-18T21:23:30, maintainer=Red Hat, Inc., name=ubi9, vendor=Red Hat, Inc., com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, version=9.4, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9, vcs-type=git, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., architecture=x86_64, config_id=kepler, release=1214.1726694543, summary=Provides the latest release of Red Hat Universal Base Image 9., io.openshift.expose-services=, container_name=kepler, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, managed_by=edpm_ansible, io.openshift.tags=base rhel9, distribution-scope=public, com.redhat.component=ubi9-container)
Jan 05 21:46:31 compute-0 nova_compute[186018]: 2026-01-05 21:46:31.366 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:31 compute-0 openstack_network_exporter[205720]: ERROR   21:46:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:46:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:46:31 compute-0 openstack_network_exporter[205720]: ERROR   21:46:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:46:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:46:33 compute-0 nova_compute[186018]: 2026-01-05 21:46:33.387 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.463 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.464 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.491 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.491 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.492 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.492 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.599 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.698 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.098s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.699 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:34 compute-0 podman[258190]: 2026-01-05 21:46:34.756197308 +0000 UTC m=+0.100756671 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251224, container_name=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.vendor=CentOS, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.762 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.063s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.773 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.835 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.062s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.836 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.931 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.094s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:34 compute-0 nova_compute[186018]: 2026-01-05 21:46:34.941 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.005 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.064s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.006 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.076 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.474 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.475 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4601MB free_disk=72.25718688964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.475 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.476 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.555 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.555 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.555 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.555 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.555 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.623 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.637 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.638 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:46:35 compute-0 nova_compute[186018]: 2026-01-05 21:46:35.639 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.163s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:46:36 compute-0 nova_compute[186018]: 2026-01-05 21:46:36.369 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:38 compute-0 nova_compute[186018]: 2026-01-05 21:46:38.390 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.633 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.634 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.635 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.958 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.959 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:46:39 compute-0 nova_compute[186018]: 2026-01-05 21:46:39.959 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:46:40 compute-0 nova_compute[186018]: 2026-01-05 21:46:40.982 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [{"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:46:41 compute-0 nova_compute[186018]: 2026-01-05 21:46:41.004 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-fe15eddf-ceea-4584-95df-dc1ea54e3c25" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:46:41 compute-0 nova_compute[186018]: 2026-01-05 21:46:41.005 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:46:41 compute-0 nova_compute[186018]: 2026-01-05 21:46:41.371 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:41 compute-0 nova_compute[186018]: 2026-01-05 21:46:41.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:42 compute-0 podman[258227]: 2026-01-05 21:46:42.760854719 +0000 UTC m=+0.099233936 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, container_name=openstack_network_exporter, build-date=2025-08-20T13:12:41, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.expose-services=, release=1755695350, version=9.6, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', 
'/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, maintainer=Red Hat, Inc., io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., com.redhat.component=ubi9-minimal-container, architecture=x86_64, name=ubi9-minimal, managed_by=edpm_ansible)
Jan 05 21:46:42 compute-0 podman[258226]: 2026-01-05 21:46:42.801325025 +0000 UTC m=+0.147295933 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, tcib_managed=true, config_id=ovn_controller, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.license=GPLv2)
Jan 05 21:46:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:46:42.888 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:46:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:46:42.888 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:46:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:46:42.889 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:46:43 compute-0 nova_compute[186018]: 2026-01-05 21:46:43.393 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:43 compute-0 nova_compute[186018]: 2026-01-05 21:46:43.455 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:45 compute-0 nova_compute[186018]: 2026-01-05 21:46:45.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:45 compute-0 nova_compute[186018]: 2026-01-05 21:46:45.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:46 compute-0 nova_compute[186018]: 2026-01-05 21:46:46.374 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:47 compute-0 nova_compute[186018]: 2026-01-05 21:46:47.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:47 compute-0 podman[258269]: 2026-01-05 21:46:47.71671078 +0000 UTC m=+0.072709874 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3)
Jan 05 21:46:47 compute-0 podman[258270]: 2026-01-05 21:46:47.736203997 +0000 UTC m=+0.090696218 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:46:48 compute-0 nova_compute[186018]: 2026-01-05 21:46:48.396 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:49 compute-0 nova_compute[186018]: 2026-01-05 21:46:49.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:50 compute-0 nova_compute[186018]: 2026-01-05 21:46:50.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_expired_console_auth_tokens run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:51 compute-0 nova_compute[186018]: 2026-01-05 21:46:51.377 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:53 compute-0 nova_compute[186018]: 2026-01-05 21:46:53.399 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:54 compute-0 podman[258312]: 2026-01-05 21:46:54.761678313 +0000 UTC m=+0.095331072 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:46:56 compute-0 nova_compute[186018]: 2026-01-05 21:46:56.381 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:58 compute-0 nova_compute[186018]: 2026-01-05 21:46:58.402 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:46:59 compute-0 nova_compute[186018]: 2026-01-05 21:46:59.476 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._run_pending_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:46:59 compute-0 nova_compute[186018]: 2026-01-05 21:46:59.477 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11145
Jan 05 21:46:59 compute-0 nova_compute[186018]: 2026-01-05 21:46:59.501 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] There are 0 instances to clean _run_pending_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:11154
Jan 05 21:46:59 compute-0 podman[202426]: time="2026-01-05T21:46:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:46:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:46:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:46:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:46:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4845 "" "Go-http-client/1.1"
Jan 05 21:47:00 compute-0 nova_compute[186018]: 2026-01-05 21:47:00.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._cleanup_incomplete_migrations run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:00 compute-0 nova_compute[186018]: 2026-01-05 21:47:00.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Cleaning up deleted instances with incomplete migration  _cleanup_incomplete_migrations /usr/lib/python3.9/site-packages/nova/compute/manager.py:11183
Jan 05 21:47:01 compute-0 nova_compute[186018]: 2026-01-05 21:47:01.385 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:01 compute-0 openstack_network_exporter[205720]: ERROR   21:47:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:47:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:47:01 compute-0 openstack_network_exporter[205720]: ERROR   21:47:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:47:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:47:01 compute-0 podman[258335]: 2026-01-05 21:47:01.747853151 +0000 UTC m=+0.098069452 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, release=1214.1726694543, architecture=x86_64, config_id=kepler, io.buildah.version=1.29.0, io.openshift.expose-services=, version=9.4, release-0.7.12=, vendor=Red Hat, Inc., io.openshift.tags=base rhel9, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, managed_by=edpm_ansible, com.redhat.component=ubi9-container, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-type=git, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. 
This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, container_name=kepler)
Jan 05 21:47:01 compute-0 podman[258336]: 2026-01-05 21:47:01.759595482 +0000 UTC m=+0.103435838 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, 
managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.059 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_power_states run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.094 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid 62f57876-af2d-4771-bffd-c87b7755cc5c _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.094 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid fe15eddf-ceea-4584-95df-dc1ea54e3c25 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.095 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Triggering sync for uuid 66b489b4-d427-4eb3-b712-aa91b1410874 _sync_power_states /usr/lib/python3.9/site-packages/nova/compute/manager.py:10268
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.095 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "62f57876-af2d-4771-bffd-c87b7755cc5c" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.096 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.096 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.097 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.097 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.098 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" acquired by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.130 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "62f57876-af2d-4771-bffd-c87b7755cc5c" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.034s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.133 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.037s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:02 compute-0 nova_compute[186018]: 2026-01-05 21:47:02.136 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" "released" by "nova.compute.manager.ComputeManager._sync_power_states.<locals>._sync.<locals>.query_driver_power_state_and_sync" :: held 0.038s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:03 compute-0 nova_compute[186018]: 2026-01-05 21:47:03.405 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:05 compute-0 podman[258374]: 2026-01-05 21:47:05.738997765 +0000 UTC m=+0.084469987 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.license=GPLv2, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, maintainer=OpenStack Kubernetes Operator team, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']})
Jan 05 21:47:06 compute-0 nova_compute[186018]: 2026-01-05 21:47:06.387 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.794 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.794 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.794 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.795 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.796 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163c61bdd0>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.799 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'name': 'te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000e', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.808 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.812 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'name': 'te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'be6cfe06-61ed-4c76-8e1d-bc9df6929005'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-0000000b', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': '0d77496083304392a3bddf3b3cc09d6f', 'user_id': '4adc8921daaf44d4b88d43bd5764da44', 'hostId': '3ca26c7ed0445332f9f9d5b660e6197db7ba063b9bde1e989d152df8', 'status': 'active', 'metadata': {'metering.server_group': '592ac083-4e5e-4ede-94dc-941b228764d4'}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.812 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.813 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.813 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.813 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.815 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.815 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.815 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.816 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.816 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:47:07.813621) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.816 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.816 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:47:07.816463) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.821 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets volume: 27 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.824 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.827 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets volume: 32 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.828 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.829 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.829 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.829 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:47:07.828797) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.830 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets volume: 31 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:47:07.830759) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.831 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.832 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.832 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.832 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.832 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:47:07.832180) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.833 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes volume: 2250 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.834 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:47:07.833480) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.835 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:47:07.834833) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.835 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.835 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.836 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.837 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.837 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.837 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.837 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:47:07.836745) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.838 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.839 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.839 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:47:07.838371) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.839 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.840 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.840 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.840 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:47:07.840087) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.857 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 31006720 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.858 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.882 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.883 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.900 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 30744576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.901 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.902 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.902 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.903 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.903 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.903 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.904 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.904 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.904 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.905 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.906 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.906 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.906 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.906 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.907 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.907 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.907 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.907 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.908 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.908 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.909 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.910 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.910 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.910 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.910 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.910 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:47:07.903998) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:47:07.907531) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.911 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:47:07.910820) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.938 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/memory.usage volume: 42.5078125 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:07.972 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.002 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/memory.usage volume: 42.53515625 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.003 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.003 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.003 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.003 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.003 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.004 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.004 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.004 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.004 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.005 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.005 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.005 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.006 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.006 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.006 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.006 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:47:08.004028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.006 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.007 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.007 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.007 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.008 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.008 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.009 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:47:08.006778) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.010 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:47:08.009887) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.060 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 30579200 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.061 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.106 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.106 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.145 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 30808576 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.146 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.147 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.147 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/network.incoming.bytes volume: 2060 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.147 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.147 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/network.incoming.bytes volume: 2318 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.147 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 496970419 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.latency volume: 60371496 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.148 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.149 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.149 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 603913622 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.149 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.latency volume: 71189160 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.149 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 1106 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.150 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:47:08.147028) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:47:08.148405) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:47:08.150473) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 1111 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.151 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.152 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:47:08.152619) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.153 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.153 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.153 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 30015488 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.153 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 73162752 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.154 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.155 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.155 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.155 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:47:08.154500) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.155 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 73170944 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.156 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/cpu volume: 335550000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:47:08.156950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 46000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/cpu volume: 339000000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.157 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 2761151049 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.158 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.159 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 3937989191 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.159 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.159 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.159 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:47:08.158297) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 344 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 14 DEBUG ceilometer.compute.pollsters [-] 66b489b4-d427-4eb3-b712-aa91b1410874/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.160 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:47:08.160445) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.161 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.161 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.161 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 339 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.161 14 DEBUG ceilometer.compute.pollsters [-] fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.161 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.162 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.163 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.164 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.165 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.166 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:47:08.167 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:47:08 compute-0 nova_compute[186018]: 2026-01-05 21:47:08.406 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:11 compute-0 nova_compute[186018]: 2026-01-05 21:47:11.390 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:13 compute-0 nova_compute[186018]: 2026-01-05 21:47:13.409 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:13 compute-0 podman[258393]: 2026-01-05 21:47:13.801534417 +0000 UTC m=+0.140689203 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, config_id=ovn_controller, container_name=ovn_controller, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:47:13 compute-0 podman[258394]: 2026-01-05 21:47:13.810852449 +0000 UTC m=+0.128409304 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Red Hat, Inc., url=https://catalog.redhat.com/en/search?searchType=containers, io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, io.openshift.expose-services=, com.redhat.component=ubi9-minimal-container, release=1755695350, distribution-scope=public, container_name=openstack_network_exporter, architecture=x86_64, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_id=openstack_network_exporter, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vcs-type=git, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', 
'/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.buildah.version=1.33.7, name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vendor=Red Hat, Inc., version=9.6, managed_by=edpm_ansible, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal)
Jan 05 21:47:16 compute-0 nova_compute[186018]: 2026-01-05 21:47:16.394 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:18 compute-0 nova_compute[186018]: 2026-01-05 21:47:18.412 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:18 compute-0 podman[258439]: 2026-01-05 21:47:18.740279384 +0000 UTC m=+0.084086519 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:47:18 compute-0 podman[258438]: 2026-01-05 21:47:18.758871947 +0000 UTC m=+0.097336016 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, managed_by=edpm_ansible, org.label-schema.build-date=20251202, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0)
Jan 05 21:47:21 compute-0 nova_compute[186018]: 2026-01-05 21:47:21.396 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:23 compute-0 nova_compute[186018]: 2026-01-05 21:47:23.415 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:25 compute-0 podman[258477]: 2026-01-05 21:47:25.75750113 +0000 UTC m=+0.111538731 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:47:26 compute-0 nova_compute[186018]: 2026-01-05 21:47:26.399 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:28 compute-0 nova_compute[186018]: 2026-01-05 21:47:28.417 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:29 compute-0 podman[202426]: time="2026-01-05T21:47:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:47:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:47:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:47:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:47:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4833 "" "Go-http-client/1.1"
Jan 05 21:47:31 compute-0 nova_compute[186018]: 2026-01-05 21:47:31.402 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:31 compute-0 openstack_network_exporter[205720]: ERROR   21:47:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:47:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:47:31 compute-0 openstack_network_exporter[205720]: ERROR   21:47:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:47:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:47:32 compute-0 podman[258502]: 2026-01-05 21:47:32.765491942 +0000 UTC m=+0.104326930 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, container_name=kepler, vendor=Red Hat, Inc., config_id=kepler, io.buildah.version=1.29.0, io.k8s.display-name=Red Hat Universal Base Image 9, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release-0.7.12=, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, distribution-scope=public, com.redhat.component=ubi9-container, managed_by=edpm_ansible, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., summary=Provides the latest release of Red Hat Universal Base Image 9., name=ubi9, io.openshift.expose-services=, architecture=x86_64, io.openshift.tags=base rhel9, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, vcs-type=git, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.4, release=1214.1726694543, maintainer=Red Hat, Inc., build-date=2024-09-18T21:23:30)
Jan 05 21:47:32 compute-0 podman[258503]: 2026-01-05 21:47:32.810751715 +0000 UTC m=+0.144291598 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi)
Jan 05 21:47:33 compute-0 nova_compute[186018]: 2026-01-05 21:47:33.421 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:34 compute-0 nova_compute[186018]: 2026-01-05 21:47:34.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:34 compute-0 nova_compute[186018]: 2026-01-05 21:47:34.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.405 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.489 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.490 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.491 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.492 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.579 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.639 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.060s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.640 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.728 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874/disk --force-share --output=json" returned: 0 in 0.088s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.735 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:36 compute-0 podman[258540]: 2026-01-05 21:47:36.735819379 +0000 UTC m=+0.087424586 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', 
'/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, tcib_managed=true, config_id=ceilometer_agent_compute, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.793 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.058s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.794 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.856 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.061s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.863 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.918 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.056s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:36 compute-0 nova_compute[186018]: 2026-01-05 21:47:36.919 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.023 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25/disk --force-share --output=json" returned: 0 in 0.103s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.385 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.387 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=4609MB free_disk=72.25718688964844GB free_vcpus=5 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.387 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.387 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.683 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.684 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance fe15eddf-ceea-4584-95df-dc1ea54e3c25 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.684 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 66b489b4-d427-4eb3-b712-aa91b1410874 actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.684 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 3 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.685 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=896MB phys_disk=79GB used_disk=3GB total_vcpus=8 used_vcpus=3 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.917 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.932 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.934 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:47:37 compute-0 nova_compute[186018]: 2026-01-05 21:47:37.935 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.548s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:38 compute-0 nova_compute[186018]: 2026-01-05 21:47:38.423 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:39 compute-0 nova_compute[186018]: 2026-01-05 21:47:39.931 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.409 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.860 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.861 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:47:41 compute-0 nova_compute[186018]: 2026-01-05 21:47:41.861 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:47:42.889 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:47:42.891 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:47:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:47:42.892 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:47:43 compute-0 nova_compute[186018]: 2026-01-05 21:47:43.011 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [{"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:47:43 compute-0 nova_compute[186018]: 2026-01-05 21:47:43.033 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-66b489b4-d427-4eb3-b712-aa91b1410874" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:47:43 compute-0 nova_compute[186018]: 2026-01-05 21:47:43.034 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:47:43 compute-0 nova_compute[186018]: 2026-01-05 21:47:43.035 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:43 compute-0 nova_compute[186018]: 2026-01-05 21:47:43.425 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:44 compute-0 podman[258578]: 2026-01-05 21:47:44.76580104 +0000 UTC m=+0.107256666 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9-minimal, version=9.6, build-date=2025-08-20T13:12:41, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, io.buildah.version=1.33.7, url=https://catalog.redhat.com/en/search?searchType=containers, vcs-type=git, maintainer=Red Hat, Inc., summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., vendor=Red Hat, Inc., architecture=x86_64, config_id=openstack_network_exporter, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, release=1755695350, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, com.redhat.component=ubi9-minimal-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, io.openshift.expose-services=, managed_by=edpm_ansible)
Jan 05 21:47:44 compute-0 podman[258577]: 2026-01-05 21:47:44.77367587 +0000 UTC m=+0.130841445 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, managed_by=edpm_ansible, org.label-schema.build-date=20251202, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_id=ovn_controller, container_name=ovn_controller)
Jan 05 21:47:45 compute-0 nova_compute[186018]: 2026-01-05 21:47:45.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:46 compute-0 nova_compute[186018]: 2026-01-05 21:47:46.412 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:47 compute-0 nova_compute[186018]: 2026-01-05 21:47:47.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:48 compute-0 nova_compute[186018]: 2026-01-05 21:47:48.427 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:49 compute-0 nova_compute[186018]: 2026-01-05 21:47:49.462 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:49 compute-0 podman[258621]: 2026-01-05 21:47:49.737049429 +0000 UTC m=+0.074782067 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:47:49 compute-0 podman[258620]: 2026-01-05 21:47:49.766450078 +0000 UTC m=+0.097306955 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, container_name=ovn_metadata_agent, org.label-schema.license=GPLv2, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, config_id=ovn_metadata_agent)
Jan 05 21:47:50 compute-0 nova_compute[186018]: 2026-01-05 21:47:50.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:47:51 compute-0 nova_compute[186018]: 2026-01-05 21:47:51.416 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:53 compute-0 nova_compute[186018]: 2026-01-05 21:47:53.430 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:56 compute-0 nova_compute[186018]: 2026-01-05 21:47:56.419 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:56 compute-0 podman[258665]: 2026-01-05 21:47:56.802474994 +0000 UTC m=+0.136048698 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:47:58 compute-0 nova_compute[186018]: 2026-01-05 21:47:58.434 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:47:59 compute-0 podman[202426]: time="2026-01-05T21:47:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:47:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:47:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 29740 "" "Go-http-client/1.1"
Jan 05 21:47:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:47:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4835 "" "Go-http-client/1.1"
Jan 05 21:48:01 compute-0 openstack_network_exporter[205720]: ERROR   21:48:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:48:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:48:01 compute-0 openstack_network_exporter[205720]: ERROR   21:48:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:48:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:48:01 compute-0 nova_compute[186018]: 2026-01-05 21:48:01.424 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:03 compute-0 nova_compute[186018]: 2026-01-05 21:48:03.435 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:03 compute-0 podman[258689]: 2026-01-05 21:48:03.74268995 +0000 UTC m=+0.085265614 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vcs-type=git, build-date=2024-09-18T21:23:30, release=1214.1726694543, config_id=kepler, managed_by=edpm_ansible, release-0.7.12=, summary=Provides the latest release of Red Hat Universal Base Image 9., com.redhat.component=ubi9-container, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, io.buildah.version=1.29.0, io.openshift.expose-services=, maintainer=Red Hat, Inc., name=ubi9, distribution-scope=public, io.openshift.tags=base rhel9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.k8s.display-name=Red Hat Universal Base Image 9, container_name=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., vendor=Red Hat, Inc., version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, architecture=x86_64)
Jan 05 21:48:03 compute-0 podman[258690]: 2026-01-05 21:48:03.745390719 +0000 UTC m=+0.076037124 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, config_id=ceilometer_agent_ipmi, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ceilometer_agent_ipmi, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, org.label-schema.schema-version=1.0)
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.426 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.963 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.964 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.964 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.965 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.965 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.966 186022 INFO nova.compute.manager [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Terminating instance
Jan 05 21:48:06 compute-0 nova_compute[186018]: 2026-01-05 21:48:06.967 186022 DEBUG nova.compute.manager [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:48:07 compute-0 kernel: tapd05ce4e7-0f (unregistering): left promiscuous mode
Jan 05 21:48:07 compute-0 NetworkManager[56598]: <info>  [1767649687.0205] device (tapd05ce4e7-0f): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.024 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 ovn_controller[98229]: 2026-01-05T21:48:07Z|00175|binding|INFO|Releasing lport d05ce4e7-0fd8-4cf1-8711-f2a049118a41 from this chassis (sb_readonly=0)
Jan 05 21:48:07 compute-0 ovn_controller[98229]: 2026-01-05T21:48:07Z|00176|binding|INFO|Setting lport d05ce4e7-0fd8-4cf1-8711-f2a049118a41 down in Southbound
Jan 05 21:48:07 compute-0 ovn_controller[98229]: 2026-01-05T21:48:07Z|00177|binding|INFO|Removing iface tapd05ce4e7-0f ovn-installed in OVS
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.031 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.034 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:f6:00:12 10.100.0.203'], port_security=['fa:16:3e:f6:00:12 10.100.0.203'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.0.203/16', 'neutron:device_id': 'fe15eddf-ceea-4584-95df-dc1ea54e3c25', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[], tunnel_key=2, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=d05ce4e7-0fd8-4cf1-8711-f2a049118a41) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.035 107689 INFO neutron.agent.ovn.metadata.agent [-] Port d05ce4e7-0fd8-4cf1-8711-f2a049118a41 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab unbound from our chassis
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.037 107689 INFO neutron.agent.ovn.metadata.agent [-] Provisioning metadata for network cfd3046a-c974-4a8e-be8e-0c5c965904ab
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.039 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.061 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e7259b92-1539-4355-af81-3bb609e135d4]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Deactivated successfully.
Jan 05 21:48:07 compute-0 systemd[1]: machine-qemu\x2d11\x2dinstance\x2d0000000b.scope: Consumed 7min 18.275s CPU time.
Jan 05 21:48:07 compute-0 systemd-machined[157312]: Machine qemu-11-instance-0000000b terminated.
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.099 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[44749054-3be9-47b6-ac87-37b1b0ba6ca9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.103 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[d61ff3d9-410d-403e-879e-ec3845fa94c7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.140 240510 DEBUG oslo.privsep.daemon [-] privsep: reply[64ebcd62-d690-4b0d-b0ab-67d0dde0e4dc]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 podman[258729]: 2026-01-05 21:48:07.146783777 +0000 UTC m=+0.102480656 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.schema-version=1.0, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, managed_by=edpm_ansible, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS, tcib_managed=true, container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image)
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.162 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[db775ad2-5bb9-4d2a-a7f3-9e7360384ed5]: (4, [{'family': 0, '__align': (), 'ifi_type': 1, 'index': 2, 'flags': 69699, 'change': 0, 'attrs': [['IFLA_IFNAME', 'tapcfd3046a-c1'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UP'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 1500], ['IFLA_MIN_MTU', 68], ['IFLA_MAX_MTU', 65535], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 8], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 8], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 2], ['IFLA_CARRIER_UP_COUNT', 1], ['IFLA_CARRIER_DOWN_COUNT', 1], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', 'fa:16:3e:9f:25:7c'], ['IFLA_BROADCAST', 'ff:ff:ff:ff:ff:ff'], ['IFLA_STATS64', {'rx_packets': 40, 'tx_packets': 11, 'rx_bytes': 1960, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 40, 'tx_packets': 11, 'rx_bytes': 1960, 'tx_bytes': 606, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_LINKINFO', {'attrs': [['IFLA_INFO_KIND', 'veth']]}], ['IFLA_LINK_NETNSID', 0], ['IFLA_LINK', 33], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 0, 'nopolicy': 0, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556128, 'reachable_time': 26717, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 1500, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 0, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 1, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 6, 'inoctets': 448, 'indelivers': 1, 'outforwdatagrams': 0, 'outpkts': 3, 'outoctets': 228, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 6, 'outmcastpkts': 3, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 448, 'outmcastoctets': 228, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 6, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 1, 'inerrors': 0, 'outmsgs': 3, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1448, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258761, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.183 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[a190ed91-66fc-42f7-b02d-b6b72d2ab44a]: (4, ({'family': 2, 'prefixlen': 32, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '169.254.169.254'], ['IFA_LOCAL', '169.254.169.254'], ['IFA_BROADCAST', '169.254.169.254'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556145, 'tstamp': 556145}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258762, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'}, {'family': 2, 'prefixlen': 16, 'flags': 128, 'scope': 0, 'index': 2, 'attrs': [['IFA_ADDRESS', '10.100.0.2'], ['IFA_LOCAL', '10.100.0.2'], ['IFA_BROADCAST', '10.100.255.255'], ['IFA_LABEL', 'tapcfd3046a-c1'], ['IFA_FLAGS', 128], ['IFA_CACHEINFO', {'ifa_preferred': 4294967295, 'ifa_valid': 4294967295, 'cstamp': 556148, 'tstamp': 556148}]], 'header': {'length': 96, 'type': 20, 'flags': 2, 'sequence_number': 255, 'pid': 258762, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'event': 'RTM_NEWADDR'})) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.184 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=br-ex, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.186 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.194 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.194 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): AddPortCommand(_result=None, bridge=br-int, port=tapcfd3046a-c0, may_exist=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.194 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.195 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Interface, record=tapcfd3046a-c0, col_values=(('external_ids', {'iface-id': '68b7e7cf-3a36-4106-85be-cc39d85ff653'}),)) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:07 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:07.195 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Transaction caused no change do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:129
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.235 186022 INFO nova.virt.libvirt.driver [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Instance destroyed successfully.
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.236 186022 DEBUG nova.objects.instance [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'resources' on Instance uuid fe15eddf-ceea-4584-95df-dc1ea54e3c25 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.248 186022 DEBUG nova.virt.libvirt.vif [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:33:32Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-ckgv372t4iqg-aqavlylhhpiy',id=11,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:33:41Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-n5lr03o8',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:33:42Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=fe15eddf-ceea-4584-95df-dc1ea54e3c25,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.248 186022 DEBUG nova.network.os_vif_util [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "address": "fa:16:3e:f6:00:12", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.203", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapd05ce4e7-0f", "ovs_interfaceid": "d05ce4e7-0fd8-4cf1-8711-f2a049118a41", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.249 186022 DEBUG nova.network.os_vif_util [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.250 186022 DEBUG os_vif [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.251 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.251 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapd05ce4e7-0f, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.254 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.255 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.258 186022 INFO os_vif [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:f6:00:12,bridge_name='br-int',has_traffic_filtering=True,id=d05ce4e7-0fd8-4cf1-8711-f2a049118a41,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tapd05ce4e7-0f')
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.259 186022 INFO nova.virt.libvirt.driver [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Deleting instance files /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25_del
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.260 186022 INFO nova.virt.libvirt.driver [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Deletion of /var/lib/nova/instances/fe15eddf-ceea-4584-95df-dc1ea54e3c25_del complete
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.309 186022 INFO nova.compute.manager [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Took 0.34 seconds to destroy the instance on the hypervisor.
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.309 186022 DEBUG oslo.service.loopingcall [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.310 186022 DEBUG nova.compute.manager [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:48:07 compute-0 nova_compute[186018]: 2026-01-05 21:48:07.310 186022 DEBUG nova.network.neutron [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:48:08 compute-0 nova_compute[186018]: 2026-01-05 21:48:08.438 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.397 186022 DEBUG nova.compute.manager [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-unplugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.398 186022 DEBUG oslo_concurrency.lockutils [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.398 186022 DEBUG oslo_concurrency.lockutils [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.398 186022 DEBUG oslo_concurrency.lockutils [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.398 186022 DEBUG nova.compute.manager [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] No waiting events found dispatching network-vif-unplugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.399 186022 DEBUG nova.compute.manager [req-91f6ea5c-2786-45f9-a2f2-fcb43d76e55b req-dea92c0c-7bf7-486a-a126-88e53ba5c948 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-unplugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:48:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:09.433 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: SbGlobalUpdateEvent(events=('update',), table='SB_Global', conditions=None, old_conditions=None), priority=20 to row=SB_Global(external_ids={}, nb_cfg=19, options={'arp_ns_explicit_output': 'true', 'mac_prefix': 'fa:ee:20', 'max_tunid': '16711680', 'northd_internal_version': '24.03.8-20.33.0-76.8', 'svc_monitor_mac': '3a:de:60:8e:c9:49'}, ipsec=False) old=SB_Global(nb_cfg=18) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:48:09 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:09.435 107689 DEBUG neutron.agent.ovn.metadata.agent [-] Delaying updating chassis table for 9 seconds run /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:274
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.436 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.658 186022 DEBUG nova.network.neutron [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.683 186022 INFO nova.compute.manager [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Took 2.37 seconds to deallocate network for instance.
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.742 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.743 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.763 186022 DEBUG nova.compute.manager [req-69c6deeb-2872-4ff9-8358-5608289b2ec9 req-8ca81e2f-6156-4263-b9c9-387b7108cf15 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-deleted-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.861 186022 DEBUG nova.compute.provider_tree [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.878 186022 DEBUG nova.scheduler.client.report [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.913 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.171s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:09 compute-0 nova_compute[186018]: 2026-01-05 21:48:09.961 186022 INFO nova.scheduler.client.report [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Deleted allocations for instance fe15eddf-ceea-4584-95df-dc1ea54e3c25
Jan 05 21:48:10 compute-0 nova_compute[186018]: 2026-01-05 21:48:10.025 186022 DEBUG oslo_concurrency.lockutils [None req-ef8e7fc0-382a-4a9c-9d68-7a9a6a45d830 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 3.061s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.494 186022 DEBUG nova.compute.manager [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.494 186022 DEBUG oslo_concurrency.lockutils [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.496 186022 DEBUG oslo_concurrency.lockutils [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.002s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.497 186022 DEBUG oslo_concurrency.lockutils [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "fe15eddf-ceea-4584-95df-dc1ea54e3c25-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.497 186022 DEBUG nova.compute.manager [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] No waiting events found dispatching network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:48:11 compute-0 nova_compute[186018]: 2026-01-05 21:48:11.497 186022 WARNING nova.compute.manager [req-26a783ca-6bfd-410b-8ea6-b98821e57015 req-3f9f62b1-25a1-4774-9b58-4716ff9c09d8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Received unexpected event network-vif-plugged-d05ce4e7-0fd8-4cf1-8711-f2a049118a41 for instance with vm_state deleted and task_state None.
Jan 05 21:48:12 compute-0 nova_compute[186018]: 2026-01-05 21:48:12.254 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:13 compute-0 nova_compute[186018]: 2026-01-05 21:48:13.442 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:15 compute-0 podman[258781]: 2026-01-05 21:48:15.777566032 +0000 UTC m=+0.108755390 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, maintainer=Red Hat, Inc., architecture=x86_64, com.redhat.component=ubi9-minimal-container, release=1755695350, vcs-type=git, version=9.6, name=ubi9-minimal, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.openshift.tags=minimal rhel9, build-date=2025-08-20T13:12:41, managed_by=edpm_ansible, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., url=https://catalog.redhat.com/en/search?searchType=containers, config_id=openstack_network_exporter, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, vendor=Red Hat, Inc., io.openshift.expose-services=, io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, container_name=openstack_network_exporter, distribution-scope=public, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9.)
Jan 05 21:48:15 compute-0 podman[258780]: 2026-01-05 21:48:15.81752922 +0000 UTC m=+0.151349315 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_controller, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, config_id=ovn_controller, io.buildah.version=1.41.3)
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.387 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.389 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" acquired by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.390 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.390 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.391 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.clear_events_for_instance.<locals>._clear_events" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.393 186022 INFO nova.compute.manager [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Terminating instance
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.396 186022 DEBUG nova.compute.manager [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Start destroying the instance on the hypervisor. _shutdown_instance /usr/lib/python3.9/site-packages/nova/compute/manager.py:3120
Jan 05 21:48:16 compute-0 kernel: tap76d8404e-32 (unregistering): left promiscuous mode
Jan 05 21:48:16 compute-0 NetworkManager[56598]: <info>  [1767649696.4356] device (tap76d8404e-32): state change: disconnected -> unmanaged (reason 'unmanaged', managed-type: 'removed')
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.456 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 ovn_controller[98229]: 2026-01-05T21:48:16Z|00178|binding|INFO|Releasing lport 76d8404e-3237-44da-934d-3e7e8792c114 from this chassis (sb_readonly=0)
Jan 05 21:48:16 compute-0 ovn_controller[98229]: 2026-01-05T21:48:16Z|00179|binding|INFO|Setting lport 76d8404e-3237-44da-934d-3e7e8792c114 down in Southbound
Jan 05 21:48:16 compute-0 ovn_controller[98229]: 2026-01-05T21:48:16Z|00180|binding|INFO|Removing iface tap76d8404e-32 ovn-installed in OVS
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.464 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.465 107689 DEBUG ovsdbapp.backend.ovs_idl.event [-] Matched UPDATE: PortBindingUpdatedEvent(events=('update',), table='Port_Binding', conditions=None, old_conditions=None), priority=20 to row=Port_Binding(mac=['fa:16:3e:58:ee:ae 10.100.2.244'], port_security=['fa:16:3e:58:ee:ae 10.100.2.244'], type=, nat_addresses=[], virtual_parent=[], up=[False], options={'requested-chassis': 'compute-0.ctlplane.example.com'}, parent_port=[], requested_additional_chassis=[], ha_chassis_group=[], external_ids={'neutron:cidrs': '10.100.2.244/16', 'neutron:device_id': '66b489b4-d427-4eb3-b712-aa91b1410874', 'neutron:device_owner': 'compute:nova', 'neutron:mtu': '', 'neutron:network_name': 'neutron-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'neutron:port_capabilities': '', 'neutron:port_name': '', 'neutron:project_id': '0d77496083304392a3bddf3b3cc09d6f', 'neutron:revision_number': '4', 'neutron:security_group_ids': 'e6045589-62d6-4436-a4e5-3eada182f76e', 'neutron:subnet_pool_addr_scope4': '', 'neutron:subnet_pool_addr_scope6': '', 'neutron:vnic_type': 'normal', 'neutron:host_id': 'compute-0.ctlplane.example.com'}, additional_chassis=[], tag=[], additional_encap=[], encap=[], mirror_rules=[], datapath=f5730d3f-9ce0-49ab-a945-1714805ce7f9, chassis=[], tunnel_key=4, gateway_chassis=[], requested_chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>], logical_port=76d8404e-3237-44da-934d-3e7e8792c114) old=Port_Binding(up=[True], chassis=[<ovs.db.idl.Row object at 0x7f731da8af10>]) matches /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/event.py:43
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.466 107689 INFO neutron.agent.ovn.metadata.agent [-] Port 76d8404e-3237-44da-934d-3e7e8792c114 in datapath cfd3046a-c974-4a8e-be8e-0c5c965904ab unbound from our chassis
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.467 107689 DEBUG neutron.agent.ovn.metadata.agent [-] No valid VIF ports were found for network cfd3046a-c974-4a8e-be8e-0c5c965904ab, tearing the namespace down if needed _get_provision_params /usr/lib/python3.9/site-packages/neutron/agent/ovn/metadata/agent.py:628
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.469 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[30bd9816-0f31-4b0a-b0de-5e7dc962f158]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.469 107689 INFO neutron.agent.ovn.metadata.agent [-] Cleaning up ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab namespace which is not needed anymore
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.478 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Deactivated successfully.
Jan 05 21:48:16 compute-0 systemd[1]: machine-qemu\x2d15\x2dinstance\x2d0000000e.scope: Consumed 6min 41.817s CPU time.
Jan 05 21:48:16 compute-0 systemd-machined[157312]: Machine qemu-15-instance-0000000e terminated.
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.642 186022 DEBUG nova.compute.manager [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-unplugged-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.646 186022 DEBUG oslo_concurrency.lockutils [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.646 186022 DEBUG oslo_concurrency.lockutils [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.647 186022 DEBUG oslo_concurrency.lockutils [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.647 186022 DEBUG nova.compute.manager [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] No waiting events found dispatching network-vif-unplugged-76d8404e-3237-44da-934d-3e7e8792c114 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.647 186022 DEBUG nova.compute.manager [req-afc4cdab-7fc5-4243-9d84-f868318a6e91 req-e14db3e7-24ae-486d-a10f-88f6b4a78fa8 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-unplugged-76d8404e-3237-44da-934d-3e7e8792c114 for instance with task_state deleting. _process_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:10826
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [NOTICE]   (252909) : haproxy version is 2.8.14-c23fe91
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [NOTICE]   (252909) : path to executable is /usr/sbin/haproxy
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [WARNING]  (252909) : Exiting Master process...
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [WARNING]  (252909) : Exiting Master process...
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.686 186022 INFO nova.virt.libvirt.driver [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Instance destroyed successfully.
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.686 186022 DEBUG nova.objects.instance [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lazy-loading 'resources' on Instance uuid 66b489b4-d427-4eb3-b712-aa91b1410874 obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [ALERT]    (252909) : Current worker (252911) exited with code 143 (Terminated)
Jan 05 21:48:16 compute-0 neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab[252905]: [WARNING]  (252909) : All workers exited. Exiting... (0)
Jan 05 21:48:16 compute-0 systemd[1]: libpod-9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e.scope: Deactivated successfully.
Jan 05 21:48:16 compute-0 podman[258853]: 2026-01-05 21:48:16.696405118 +0000 UTC m=+0.076875978 container died 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, io.buildah.version=1.41.3, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image)
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.706 186022 DEBUG nova.virt.libvirt.vif [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] vif_type=ovs instance=Instance(access_ip_v4=None,access_ip_v6=None,architecture=None,auto_disk_config=False,availability_zone='nova',cell_name=None,cleaned=False,config_drive='True',created_at=2026-01-05T21:38:10Z,default_ephemeral_device=None,default_swap_device=None,deleted=False,deleted_at=None,device_metadata=<?>,disable_terminate=False,display_description=None,display_name='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',ec2_ids=<?>,ephemeral_gb=0,ephemeral_key_uuid=None,fault=<?>,flavor=Flavor(3),hidden=False,host='compute-0.ctlplane.example.com',hostname='te-6530778-asg-yb4g67iwlud7-6edchnla5huu-gomw4qzu42ut',id=14,image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',info_cache=InstanceInfoCache,instance_type_id=3,kernel_id='',key_data=None,key_name=None,keypairs=<?>,launch_index=0,launched_at=2026-01-05T21:38:18Z,launched_on='compute-0.ctlplane.example.com',locked=False,locked_by=None,memory_mb=128,metadata={metering.server_group='592ac083-4e5e-4ede-94dc-941b228764d4'},migration_context=<?>,new_flavor=None,node='compute-0.ctlplane.example.com',numa_topology=None,old_flavor=None,os_type=None,pci_devices=<?>,pci_requests=<?>,power_state=1,progress=0,project_id='0d77496083304392a3bddf3b3cc09d6f',ramdisk_id='',reservation_id='r-130i0h19',resources=None,root_device_name='/dev/vda',root_gb=1,security_groups=SecurityGroupList,services=<?>,shutdown_terminate=False,system_metadata={boot_roles='member,reader',image_base_image_ref='be6cfe06-61ed-4c76-8e1d-bc9df6929005',image_container_format='bare',image_disk_format='qcow2',image_hw_cdrom_bus='sata',image_hw_disk_bus='virtio',image_hw_input_bus='usb',image_hw_machine_type='q35',image_hw_pointer_model='usbtablet',image_hw_video_model='virtio',image_hw_vif_model='virtio',image_min_disk='1',image_min_ram='0',owner_project_name='tempest-PrometheusGabbiTest-1091853177',owner_user_name='tempest-PrometheusGabbiTest-1091853177-project-member'},tags=<?>,task_state='deleting',terminated_at=None,trusted_certs=<?>,updated_at=2026-01-05T21:38:18Z,user_data='IyEvYmluL3NoCmVjaG8gJ0xvYWRpbmcgQ1BVJwpzZXQgLXYKY2F0IC9kZXYvdXJhbmRvbSA+IC9kZXYvbnVsbCAmIHNsZWVwIDMwMCA7IGtpbGwgJCEgCg==',user_id='4adc8921daaf44d4b88d43bd5764da44',uuid=66b489b4-d427-4eb3-b712-aa91b1410874,vcpu_model=<?>,vcpus=1,vm_mode=None,vm_state='active') vif={"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} unplug /usr/lib/python3.9/site-packages/nova/virt/libvirt/vif.py:828
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.706 186022 DEBUG nova.network.os_vif_util [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converting VIF {"id": "76d8404e-3237-44da-934d-3e7e8792c114", "address": "fa:16:3e:58:ee:ae", "network": {"id": "cfd3046a-c974-4a8e-be8e-0c5c965904ab", "bridge": "br-int", "label": "", "subnets": [{"cidr": "10.100.0.0/16", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.2.244", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "0d77496083304392a3bddf3b3cc09d6f", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tap76d8404e-32", "ovs_interfaceid": "76d8404e-3237-44da-934d-3e7e8792c114", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}} nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:511
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.707 186022 DEBUG nova.network.os_vif_util [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Converted object VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32') nova_to_osvif_vif /usr/lib/python3.9/site-packages/nova/network/os_vif_util.py:548
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.708 186022 DEBUG os_vif [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Unplugging vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32') unplug /usr/lib/python3.9/site-packages/os_vif/__init__.py:109
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.710 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 20 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.710 186022 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tap76d8404e-32, bridge=br-int, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.712 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.714 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] 0-ms timeout __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:248
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.714 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.717 186022 INFO os_vif [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Successfully unplugged vif VIFOpenVSwitch(active=True,address=fa:16:3e:58:ee:ae,bridge_name='br-int',has_traffic_filtering=True,id=76d8404e-3237-44da-934d-3e7e8792c114,network=Network(cfd3046a-c974-4a8e-be8e-0c5c965904ab),plugin='ovs',port_profile=VIFPortProfileOpenVSwitch,preserve_on_delete=False,vif_name='tap76d8404e-32')
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.718 186022 INFO nova.virt.libvirt.driver [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Deleting instance files /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874_del
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.718 186022 INFO nova.virt.libvirt.driver [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Deletion of /var/lib/nova/instances/66b489b4-d427-4eb3-b712-aa91b1410874_del complete
Jan 05 21:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay\x2dcontainers-9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e-userdata-shm.mount: Deactivated successfully.
Jan 05 21:48:16 compute-0 systemd[1]: var-lib-containers-storage-overlay-2750673adb764cef734147431fa120d99146f8cc04e7f186b5e132a3548e49ad-merged.mount: Deactivated successfully.
Jan 05 21:48:16 compute-0 podman[258853]: 2026-01-05 21:48:16.764637602 +0000 UTC m=+0.145108462 container cleanup 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, org.label-schema.license=GPLv2, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, org.label-schema.vendor=CentOS, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3)
Jan 05 21:48:16 compute-0 systemd[1]: libpod-conmon-9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e.scope: Deactivated successfully.
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.795 186022 INFO nova.compute.manager [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Took 0.40 seconds to destroy the instance on the hypervisor.
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.797 186022 DEBUG oslo.service.loopingcall [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Waiting for function nova.compute.manager.ComputeManager._try_deallocate_network.<locals>._deallocate_network_with_retries to return. func /usr/lib/python3.9/site-packages/oslo_service/loopingcall.py:435
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.797 186022 DEBUG nova.compute.manager [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Deallocating network for instance _deallocate_network /usr/lib/python3.9/site-packages/nova/compute/manager.py:2259
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.798 186022 DEBUG nova.network.neutron [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] deallocate_for_instance() deallocate_for_instance /usr/lib/python3.9/site-packages/nova/network/neutron.py:1803
Jan 05 21:48:16 compute-0 podman[258898]: 2026-01-05 21:48:16.886611957 +0000 UTC m=+0.081008048 container remove 9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true)
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.901 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[9bbaa929-b92c-4353-8c2c-eba7842167a6]: (4, ('Mon Jan  5 09:48:16 PM UTC 2026 Stopping container neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab (9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e)\n9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e\nMon Jan  5 09:48:16 PM UTC 2026 Deleting container neutron-haproxy-ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab (9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e)\n9b9431a5469e14933ea3c179cc32548e931ad8d2d6c5bdc9dbde22e0668a945e\n', '', 0)) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.904 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[51060954-83f1-4367-9a18-b059bd0c23f7]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.907 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DelPortCommand(_result=None, port=tapcfd3046a-c0, bridge=None, if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.912 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 kernel: tapcfd3046a-c0: left promiscuous mode
Jan 05 21:48:16 compute-0 nova_compute[186018]: 2026-01-05 21:48:16.930 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.933 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[630a760a-35fa-4f04-ad1c-01b2ceba4dd7]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.950 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[57128ad4-8285-48fe-950f-eef8866e98b9]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.952 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[85725992-7e2a-4079-b9df-10721896a8a5]: (4, True) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.973 240489 DEBUG oslo.privsep.daemon [-] privsep: reply[e8f3541a-34e8-426a-a29a-74e2f6e93219]: (4, [{'family': 0, '__align': (), 'ifi_type': 772, 'index': 1, 'flags': 65609, 'change': 0, 'attrs': [['IFLA_IFNAME', 'lo'], ['IFLA_TXQLEN', 1000], ['IFLA_OPERSTATE', 'UNKNOWN'], ['IFLA_LINKMODE', 0], ['IFLA_MTU', 65536], ['IFLA_MIN_MTU', 0], ['IFLA_MAX_MTU', 0], ['IFLA_GROUP', 0], ['IFLA_PROMISCUITY', 0], ['UNKNOWN', {'header': {'length': 8, 'type': 61}}], ['IFLA_NUM_TX_QUEUES', 1], ['IFLA_GSO_MAX_SEGS', 65535], ['IFLA_GSO_MAX_SIZE', 65536], ['IFLA_GRO_MAX_SIZE', 65536], ['UNKNOWN', {'header': {'length': 8, 'type': 63}}], ['UNKNOWN', {'header': {'length': 8, 'type': 64}}], ['IFLA_TSO_MAX_SIZE', 524280], ['IFLA_TSO_MAX_SEGS', 65535], ['UNKNOWN', {'header': {'length': 8, 'type': 66}}], ['IFLA_NUM_RX_QUEUES', 1], ['IFLA_CARRIER', 1], ['IFLA_CARRIER_CHANGES', 0], ['IFLA_CARRIER_UP_COUNT', 0], ['IFLA_CARRIER_DOWN_COUNT', 0], ['IFLA_PROTO_DOWN', 0], ['IFLA_ADDRESS', '00:00:00:00:00:00'], ['IFLA_BROADCAST', '00:00:00:00:00:00'], ['IFLA_STATS64', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_STATS', {'rx_packets': 1, 'tx_packets': 1, 'rx_bytes': 28, 'tx_bytes': 28, 'rx_errors': 0, 'tx_errors': 0, 'rx_dropped': 0, 'tx_dropped': 0, 'multicast': 0, 'collisions': 0, 'rx_length_errors': 0, 'rx_over_errors': 0, 'rx_crc_errors': 0, 'rx_frame_errors': 0, 'rx_fifo_errors': 0, 'rx_missed_errors': 0, 'tx_aborted_errors': 0, 'tx_carrier_errors': 0, 'tx_fifo_errors': 0, 'tx_heartbeat_errors': 0, 
'tx_window_errors': 0, 'rx_compressed': 0, 'tx_compressed': 0}], ['IFLA_XDP', {'attrs': [['IFLA_XDP_ATTACHED', None]]}], ['IFLA_QDISC', 'noqueue'], ['IFLA_AF_SPEC', {'attrs': [['AF_INET', {'dummy': 65668, 'forwarding': 1, 'mc_forwarding': 0, 'proxy_arp': 0, 'accept_redirects': 0, 'secure_redirects': 0, 'send_redirects': 0, 'shared_media': 1, 'rp_filter': 1, 'accept_source_route': 0, 'bootp_relay': 0, 'log_martians': 0, 'tag': 0, 'arpfilter': 0, 'medium_id': 0, 'noxfrm': 1, 'nopolicy': 1, 'force_igmp_version': 0, 'arp_announce': 0, 'arp_ignore': 0, 'promote_secondaries': 1, 'arp_accept': 0, 'arp_notify': 0, 'accept_local': 0, 'src_vmark': 0, 'proxy_arp_pvlan': 0, 'route_localnet': 0, 'igmpv2_unsolicited_report_interval': 10000, 'igmpv3_unsolicited_report_interval': 1000}], ['AF_INET6', {'attrs': [['IFLA_INET6_FLAGS', 2147483648], ['IFLA_INET6_CACHEINFO', {'max_reasm_len': 65535, 'tstamp': 556119, 'reachable_time': 26167, 'retrans_time': 1000}], ['IFLA_INET6_CONF', {'forwarding': 0, 'hop_limit': 64, 'mtu': 65536, 'accept_ra': 1, 'accept_redirects': 1, 'autoconf': 1, 'dad_transmits': 1, 'router_solicitations': 4294967295, 'router_solicitation_interval': 4000, 'router_solicitation_delay': 1000, 'use_tempaddr': 4294967295, 'temp_valid_lft': 604800, 'temp_preferred_lft': 86400, 'regen_max_retry': 3, 'max_desync_factor': 600, 'max_addresses': 16, 'force_mld_version': 0, 'accept_ra_defrtr': 1, 'accept_ra_pinfo': 1, 'accept_ra_rtr_pref': 1, 'router_probe_interval': 60000, 'accept_ra_rt_info_max_plen': 0, 'proxy_ndp': 0, 'optimistic_dad': 0, 'accept_source_route': 0, 'mc_forwarding': 0, 'disable_ipv6': 0, 'accept_dad': 4294967295, 'force_tllao': 0, 'ndisc_notify': 0}], ['IFLA_INET6_STATS', {'num': 38, 'inpkts': 0, 'inoctets': 0, 'indelivers': 0, 'outforwdatagrams': 0, 'outpkts': 0, 'outoctets': 0, 'inhdrerrors': 0, 'intoobigerrors': 0, 'innoroutes': 0, 'inaddrerrors': 0, 'inunknownprotos': 0, 'intruncatedpkts': 0, 'indiscards': 0, 'outdiscards': 0, 'outnoroutes': 0, 
'reasmtimeout': 0, 'reasmreqds': 0, 'reasmoks': 0, 'reasmfails': 0, 'fragoks': 0, 'fragfails': 0, 'fragcreates': 0, 'inmcastpkts': 0, 'outmcastpkts': 0, 'inbcastpkts': 0, 'outbcastpkts': 0, 'inmcastoctets': 0, 'outmcastoctets': 0, 'inbcastoctets': 0, 'outbcastoctets': 0, 'csumerrors': 0, 'noectpkts': 0, 'ect1pkts': 0, 'ect0pkts': 0, 'cepkts': 0}], ['IFLA_INET6_ICMP6STATS', {'num': 7, 'inmsgs': 0, 'inerrors': 0, 'outmsgs': 0, 'outerrors': 0, 'csumerrors': 0}], ['IFLA_INET6_TOKEN', '::'], ['IFLA_INET6_ADDR_GEN_MODE', 0]]}]]}], ['IFLA_MAP', {'mem_start': 0, 'mem_end': 0, 'base_addr': 0, 'irq': 0, 'dma': 0, 'port': 0}], ['UNKNOWN', {'header': {'length': 4, 'type': 32830}}], ['UNKNOWN', {'header': {'length': 4, 'type': 32833}}]], 'header': {'length': 1404, 'type': 16, 'flags': 2, 'sequence_number': 255, 'pid': 258911, 'error': None, 'target': 'ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab', 'stats': (0, 0, 0)}, 'state': 'up', 'event': 'RTM_NEWLINK'}]) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.978 108136 DEBUG neutron.privileged.agent.linux.ip_lib [-] Namespace ovnmeta-cfd3046a-c974-4a8e-be8e-0c5c965904ab deleted. remove_netns /usr/lib/python3.9/site-packages/neutron/privileged/agent/linux/ip_lib.py:607
Jan 05 21:48:16 compute-0 systemd[1]: run-netns-ovnmeta\x2dcfd3046a\x2dc974\x2d4a8e\x2dbe8e\x2d0c5c965904ab.mount: Deactivated successfully.
Jan 05 21:48:16 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:16.978 108136 DEBUG oslo.privsep.daemon [-] privsep: reply[e1a3c8b4-61b6-4159-bdc1-6d59ddf35900]: (4, None) _call_back /usr/lib/python3.9/site-packages/oslo_privsep/daemon.py:501
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.091 186022 DEBUG nova.network.neutron [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Updating instance_info_cache with network_info: [] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.107 186022 INFO nova.compute.manager [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Took 1.31 seconds to deallocate network for instance.
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.151 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.update_usage" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.152 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.236 186022 DEBUG nova.compute.provider_tree [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.257 186022 DEBUG nova.scheduler.client.report [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.285 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.update_usage" :: held 0.133s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.313 186022 INFO nova.scheduler.client.report [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Deleted allocations for instance 66b489b4-d427-4eb3-b712-aa91b1410874
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.394 186022 DEBUG oslo_concurrency.lockutils [None req-8e33bda1-1965-4651-a051-460164843c77 4adc8921daaf44d4b88d43bd5764da44 0d77496083304392a3bddf3b3cc09d6f - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874" "released" by "nova.compute.manager.ComputeManager.terminate_instance.<locals>.do_terminate_instance" :: held 2.005s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:18 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:18.439 107689 DEBUG ovsdbapp.backend.ovs_idl.transaction [-] Running txn n=1 command(idx=0): DbSetCommand(_result=None, table=Chassis_Private, record=d9598dc9-bc2d-4d46-a5e4-5e13afbc9e1b, col_values=(('external_ids', {'neutron:ovn-metadata-sb-cfg': '19'}),), if_exists=True) do_commit /usr/lib/python3.9/site-packages/ovsdbapp/backend/ovs_idl/transaction.py:89
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.446 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.769 186022 DEBUG nova.compute.manager [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.769 186022 DEBUG oslo_concurrency.lockutils [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Acquiring lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.770 186022 DEBUG oslo_concurrency.lockutils [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" acquired by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: waited 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.771 186022 DEBUG oslo_concurrency.lockutils [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] Lock "66b489b4-d427-4eb3-b712-aa91b1410874-events" "released" by "nova.compute.manager.InstanceEvents.pop_instance_event.<locals>._pop_event" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.771 186022 DEBUG nova.compute.manager [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] No waiting events found dispatching network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 pop_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:320
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.771 186022 WARNING nova.compute.manager [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received unexpected event network-vif-plugged-76d8404e-3237-44da-934d-3e7e8792c114 for instance with vm_state deleted and task_state None.
Jan 05 21:48:18 compute-0 nova_compute[186018]: 2026-01-05 21:48:18.772 186022 DEBUG nova.compute.manager [req-51f74fc8-57a2-4a18-9916-6405a643d0da req-9ca0fc7c-8606-4550-8b11-5b5a69eb9e17 6fb87d300fd645f5bbaf55c00b0bc85c dbc08493ae4a40da96eed31f652f0654 - - default default] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Received event network-vif-deleted-76d8404e-3237-44da-934d-3e7e8792c114 external_instance_event /usr/lib/python3.9/site-packages/nova/compute/manager.py:11048
Jan 05 21:48:20 compute-0 podman[258915]: 2026-01-05 21:48:20.783702492 +0000 UTC m=+0.115981471 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:48:20 compute-0 podman[258914]: 2026-01-05 21:48:20.796564068 +0000 UTC m=+0.131615558 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, org.label-schema.license=GPLv2, io.buildah.version=1.41.3, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', 
'/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']})
Jan 05 21:48:21 compute-0 nova_compute[186018]: 2026-01-05 21:48:21.713 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:22 compute-0 nova_compute[186018]: 2026-01-05 21:48:22.233 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767649687.2320397, fe15eddf-ceea-4584-95df-dc1ea54e3c25 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:48:22 compute-0 nova_compute[186018]: 2026-01-05 21:48:22.234 186022 INFO nova.compute.manager [-] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] VM Stopped (Lifecycle Event)
Jan 05 21:48:22 compute-0 nova_compute[186018]: 2026-01-05 21:48:22.263 186022 DEBUG nova.compute.manager [None req-d077c1a6-fd1b-4c47-a331-fcfe81578f27 - - - - - -] [instance: fe15eddf-ceea-4584-95df-dc1ea54e3c25] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:48:23 compute-0 nova_compute[186018]: 2026-01-05 21:48:23.450 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:26 compute-0 nova_compute[186018]: 2026-01-05 21:48:26.716 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:27 compute-0 podman[258953]: 2026-01-05 21:48:27.795754734 +0000 UTC m=+0.142028592 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter, container_name=node_exporter)
Jan 05 21:48:28 compute-0 nova_compute[186018]: 2026-01-05 21:48:28.454 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:29 compute-0 podman[202426]: time="2026-01-05T21:48:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:48:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:48:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:48:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:48:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4366 "" "Go-http-client/1.1"
Jan 05 21:48:30 compute-0 ovn_controller[98229]: 2026-01-05T21:48:30Z|00181|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:48:30 compute-0 nova_compute[186018]: 2026-01-05 21:48:30.250 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:31 compute-0 openstack_network_exporter[205720]: ERROR   21:48:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:48:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:48:31 compute-0 openstack_network_exporter[205720]: ERROR   21:48:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:48:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:48:31 compute-0 nova_compute[186018]: 2026-01-05 21:48:31.682 186022 DEBUG nova.virt.driver [-] Emitting event <LifecycleEvent: 1767649696.680642, 66b489b4-d427-4eb3-b712-aa91b1410874 => Stopped> emit_event /usr/lib/python3.9/site-packages/nova/virt/driver.py:1653
Jan 05 21:48:31 compute-0 nova_compute[186018]: 2026-01-05 21:48:31.682 186022 INFO nova.compute.manager [-] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] VM Stopped (Lifecycle Event)
Jan 05 21:48:31 compute-0 nova_compute[186018]: 2026-01-05 21:48:31.704 186022 DEBUG nova.compute.manager [None req-340b94f7-1d4f-4742-a332-e4ca0405761c - - - - - -] [instance: 66b489b4-d427-4eb3-b712-aa91b1410874] Checking state _get_power_state /usr/lib/python3.9/site-packages/nova/compute/manager.py:1762
Jan 05 21:48:31 compute-0 nova_compute[186018]: 2026-01-05 21:48:31.719 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:32 compute-0 ovn_controller[98229]: 2026-01-05T21:48:32Z|00182|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:48:32 compute-0 nova_compute[186018]: 2026-01-05 21:48:32.867 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:33 compute-0 nova_compute[186018]: 2026-01-05 21:48:33.455 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:34 compute-0 podman[258977]: 2026-01-05 21:48:34.789060665 +0000 UTC m=+0.112290603 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, 
io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, managed_by=edpm_ansible, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_id=ceilometer_agent_ipmi)
Jan 05 21:48:34 compute-0 podman[258976]: 2026-01-05 21:48:34.816848717 +0000 UTC m=+0.149451499 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, build-date=2024-09-18T21:23:30, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, distribution-scope=public, io.k8s.display-name=Red Hat Universal Base Image 9, vendor=Red Hat, Inc., release-0.7.12=, release=1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., config_id=kepler, vcs-type=git, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, name=ubi9, com.redhat.component=ubi9-container, maintainer=Red Hat, Inc., io.buildah.version=1.29.0, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., architecture=x86_64, container_name=kepler, io.openshift.tags=base rhel9)
Jan 05 21:48:36 compute-0 nova_compute[186018]: 2026-01-05 21:48:36.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:36 compute-0 nova_compute[186018]: 2026-01-05 21:48:36.462 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:48:36 compute-0 ovn_controller[98229]: 2026-01-05T21:48:36Z|00183|binding|INFO|Releasing lport c3e05f88-97c2-469c-81f3-d52dff3918b2 from this chassis (sb_readonly=0)
Jan 05 21:48:36 compute-0 nova_compute[186018]: 2026-01-05 21:48:36.639 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:36 compute-0 nova_compute[186018]: 2026-01-05 21:48:36.722 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.484 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.484 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.485 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.000s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.485 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.588 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.657 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.069s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.659 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:48:37 compute-0 podman[259016]: 2026-01-05 21:48:37.676668833 +0000 UTC m=+0.114486947 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.4, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, 
container_name=ceilometer_agent_compute, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_managed=true, config_id=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, org.label-schema.vendor=CentOS, tcib_build_tag=9d61202dec2d131dec612b9e8291355e)
Jan 05 21:48:37 compute-0 nova_compute[186018]: 2026-01-05 21:48:37.730 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.071s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.163 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.165 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5113MB free_disk=72.31593322753906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.165 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.166 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.253 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.254 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.254 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.268 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing inventories for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:804
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.294 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating ProviderTree inventory for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 from _refresh_and_get_inventory using data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} _refresh_and_get_inventory /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:768
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.295 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Updating inventory in ProviderTree for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 with inventory: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:176
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.314 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing aggregate associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, aggregates: None _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:813
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.331 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Refreshing trait associations for resource provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7, traits: COMPUTE_NET_VIF_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_E1000,HW_CPU_X86_SVM,COMPUTE_NET_ATTACH_INTERFACE,HW_CPU_X86_FMA3,COMPUTE_VIOMMU_MODEL_INTEL,HW_CPU_X86_AESNI,HW_CPU_X86_BMI2,COMPUTE_NODE,HW_CPU_X86_BMI,HW_CPU_X86_SSSE3,COMPUTE_VIOMMU_MODEL_VIRTIO,COMPUTE_GRAPHICS_MODEL_VIRTIO,COMPUTE_NET_VIF_MODEL_SPAPR_VLAN,COMPUTE_TRUSTED_CERTS,COMPUTE_ACCELERATORS,COMPUTE_NET_VIF_MODEL_PCNET,HW_CPU_X86_SSE,COMPUTE_GRAPHICS_MODEL_NONE,COMPUTE_STORAGE_BUS_SATA,HW_CPU_X86_SHA,COMPUTE_IMAGE_TYPE_ARI,COMPUTE_SOCKET_PCI_NUMA_AFFINITY,COMPUTE_VOLUME_ATTACH_WITH_TAG,HW_CPU_X86_AVX,COMPUTE_IMAGE_TYPE_AKI,COMPUTE_RESCUE_BFV,COMPUTE_NET_VIF_MODEL_VMXNET3,COMPUTE_NET_VIF_MODEL_RTL8139,COMPUTE_SECURITY_TPM_1_2,HW_CPU_X86_AVX2,HW_CPU_X86_SSE41,COMPUTE_NET_VIF_MODEL_NE2K_PCI,COMPUTE_STORAGE_BUS_IDE,COMPUTE_NET_ATTACH_INTERFACE_WITH_TAG,COMPUTE_STORAGE_BUS_SCSI,COMPUTE_SECURITY_TPM_2_0,HW_CPU_X86_SSE42,HW_CPU_X86_AMD_SVM,COMPUTE_VOLUME_EXTEND,HW_CPU_X86_MMX,COMPUTE_DEVICE_TAGGING,COMPUTE_GRAPHICS_MODEL_BOCHS,COMPUTE_STORAGE_BUS_USB,COMPUTE_VIOMMU_MODEL_AUTO,COMPUTE_IMAGE_TYPE_QCOW2,HW_CPU_X86_ABM,COMPUTE_VOLUME_MULTI_ATTACH,HW_CPU_X86_CLMUL,COMPUTE_NET_VIF_MODEL_E1000E,COMPUTE_SECURITY_UEFI_SECURE_BOOT,COMPUTE_IMAGE_TYPE_ISO,COMPUTE_GRAPHICS_MODEL_VGA,HW_CPU_X86_F16C,HW_CPU_X86_SSE4A,COMPUTE_IMAGE_TYPE_AMI,COMPUTE_GRAPHICS_MODEL_CIRRUS,COMPUTE_STORAGE_BUS_FDC,HW_CPU_X86_SSE2,COMPUTE_IMAGE_TYPE_RAW,COMPUTE_STORAGE_BUS_VIRTIO _refresh_associations /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:825
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.377 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.392 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.424 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.425 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.259s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:38 compute-0 nova_compute[186018]: 2026-01-05 21:48:38.458 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:39 compute-0 nova_compute[186018]: 2026-01-05 21:48:39.420 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:41 compute-0 nova_compute[186018]: 2026-01-05 21:48:41.726 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:42 compute-0 nova_compute[186018]: 2026-01-05 21:48:42.459 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:42 compute-0 nova_compute[186018]: 2026-01-05 21:48:42.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:48:42 compute-0 nova_compute[186018]: 2026-01-05 21:48:42.460 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:42.890 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:42.891 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:48:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:48:42.892 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:48:43 compute-0 nova_compute[186018]: 2026-01-05 21:48:43.054 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:48:43 compute-0 nova_compute[186018]: 2026-01-05 21:48:43.055 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:48:43 compute-0 nova_compute[186018]: 2026-01-05 21:48:43.056 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:48:43 compute-0 nova_compute[186018]: 2026-01-05 21:48:43.057 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:48:43 compute-0 nova_compute[186018]: 2026-01-05 21:48:43.460 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:44 compute-0 nova_compute[186018]: 2026-01-05 21:48:44.597 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:48:44 compute-0 nova_compute[186018]: 2026-01-05 21:48:44.629 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:48:44 compute-0 nova_compute[186018]: 2026-01-05 21:48:44.629 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:48:44 compute-0 nova_compute[186018]: 2026-01-05 21:48:44.630 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:45 compute-0 nova_compute[186018]: 2026-01-05 21:48:45.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:46 compute-0 nova_compute[186018]: 2026-01-05 21:48:46.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._sync_scheduler_instance_info run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:46 compute-0 nova_compute[186018]: 2026-01-05 21:48:46.729 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:46 compute-0 podman[259043]: 2026-01-05 21:48:46.775923628 +0000 UTC m=+0.110008196 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, architecture=x86_64, version=9.6, io.buildah.version=1.33.7, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., container_name=openstack_network_exporter, io.openshift.expose-services=, url=https://catalog.redhat.com/en/search?searchType=containers, com.redhat.component=ubi9-minimal-container, config_id=openstack_network_exporter, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, name=ubi9-minimal, vcs-type=git, io.openshift.tags=minimal rhel9, distribution-scope=public, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., maintainer=Red Hat, Inc., release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, vendor=Red Hat, Inc., managed_by=edpm_ansible, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']})
Jan 05 21:48:46 compute-0 podman[259042]: 2026-01-05 21:48:46.832024018 +0000 UTC m=+0.174213293 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.41.3, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, org.label-schema.build-date=20251202, config_id=ovn_controller, managed_by=edpm_ansible, maintainer=OpenStack Kubernetes Operator team, org.label-schema.name=CentOS Stream 9 
Base Image, org.label-schema.schema-version=1.0)
Jan 05 21:48:47 compute-0 nova_compute[186018]: 2026-01-05 21:48:47.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:48 compute-0 nova_compute[186018]: 2026-01-05 21:48:48.462 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:49 compute-0 nova_compute[186018]: 2026-01-05 21:48:49.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:50 compute-0 nova_compute[186018]: 2026-01-05 21:48:50.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:48:51 compute-0 podman[259093]: 2026-01-05 21:48:51.732802544 +0000 UTC m=+0.078755473 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>)
Jan 05 21:48:51 compute-0 nova_compute[186018]: 2026-01-05 21:48:51.733 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:51 compute-0 podman[259092]: 2026-01-05 21:48:51.776848411 +0000 UTC m=+0.116920428 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, tcib_managed=true, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, config_id=ovn_metadata_agent, org.label-schema.license=GPLv2, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, maintainer=OpenStack Kubernetes Operator team)
Jan 05 21:48:53 compute-0 nova_compute[186018]: 2026-01-05 21:48:53.465 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:56 compute-0 nova_compute[186018]: 2026-01-05 21:48:56.736 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:58 compute-0 nova_compute[186018]: 2026-01-05 21:48:58.468 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:48:58 compute-0 podman[259132]: 2026-01-05 21:48:58.761696518 +0000 UTC m=+0.095930174 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:48:59 compute-0 podman[202426]: time="2026-01-05T21:48:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:48:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:48:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:48:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:48:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4372 "" "Go-http-client/1.1"
Jan 05 21:49:01 compute-0 openstack_network_exporter[205720]: ERROR   21:49:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:49:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:49:01 compute-0 openstack_network_exporter[205720]: ERROR   21:49:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:49:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:49:01 compute-0 nova_compute[186018]: 2026-01-05 21:49:01.738 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:03 compute-0 nova_compute[186018]: 2026-01-05 21:49:03.471 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:05 compute-0 podman[259157]: 2026-01-05 21:49:05.814010905 +0000 UTC m=+0.147248535 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_ipmi, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, tcib_managed=true, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2)
Jan 05 21:49:05 compute-0 podman[259156]: 2026-01-05 21:49:05.838903812 +0000 UTC m=+0.179793856 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, release-0.7.12=, release=1214.1726694543, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, maintainer=Red Hat, Inc., url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, io.k8s.display-name=Red Hat Universal Base Image 9, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, version=9.4, com.redhat.component=ubi9-container, config_id=kepler, description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., name=ubi9, container_name=kepler, io.openshift.tags=base rhel9, architecture=x86_64, distribution-scope=public, io.buildah.version=1.29.0, managed_by=edpm_ansible, build-date=2024-09-18T21:23:30, io.openshift.expose-services=, summary=Provides the latest release of Red Hat Universal Base Image 9., vcs-type=git, vendor=Red Hat, Inc., io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:49:06 compute-0 nova_compute[186018]: 2026-01-05 21:49:06.743 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:07 compute-0 ovn_controller[98229]: 2026-01-05T21:49:07Z|00184|memory_trim|INFO|Detected inactivity (last active 30013 ms ago): trimming memory
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.794 14 DEBUG ceilometer.polling.manager [-] The number of pollsters in source [pollsters] is bigger than the number of worker threads to execute them. Therefore, one can expect the process to be longer than the expected. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:253
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.795 14 DEBUG ceilometer.polling.manager [-] Processing pollsters for [pollsters] with [1] threads. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:262
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.795 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.796 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.EphemeralSizePollster object at 0x7f163c67d820>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.797 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.798 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.799 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fa10>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.800 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.801 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.802 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67fb00>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.803 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.804 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.805 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163d133770>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.806 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.807 14 DEBUG ceilometer.polling.manager [-] Registering pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] from source [pollsters] to be executed via executor [<concurrent.futures.thread.ThreadPoolExecutor object at 0x7f163d10d370>] with cache [{}], pollster history [{}], and discovery cache [{}]. register_pollster_execution /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:276
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.808 14 DEBUG ceilometer.compute.discovery [-] instance data: {'id': '62f57876-af2d-4771-bffd-c87b7755cc5c', 'name': 'tempest-AttachInterfacesUnderV243Test-server-306597775', 'flavor': {'id': 'ce1138a2-4b82-4664-8860-711a956c0882', 'name': 'm1.nano', 'vcpus': 1, 'ram': 128, 'disk': 1, 'ephemeral': 0, 'swap': 0}, 'image': {'id': 'ebb2027f-05a6-465a-af75-b7da40a91332'}, 'os_type': 'hvm', 'architecture': 'x86_64', 'OS-EXT-SRV-ATTR:instance_name': 'instance-00000006', 'OS-EXT-SRV-ATTR:host': 'compute-0.ctlplane.example.com', 'OS-EXT-STS:vm_state': 'running', 'tenant_id': 'e0899289c7dd4631b4fa69150a914123', 'user_id': '168ad639a6ed41c8bd954c434807ef6c', 'hostId': 'c3f8712f401137fbbdc6483d36c041bcfcf3dfa8c8dce0a58aba2f1b', 'status': 'active', 'metadata': {}} discover_libvirt_polling /usr/lib/python3.12/site-packages/ceilometer/compute/discovery.py:315
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.809 14 INFO ceilometer.polling.manager [-] Polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.809 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.809 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d850>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.810 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.ephemeral.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.811 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.ephemeral.size in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.811 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingPacketsPollster object at 0x7f163c67f8c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.811 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.812 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.812 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c050>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.812 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.812 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.ephemeral.size (2026-01-05T21:49:07.809869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.813 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets (2026-01-05T21:49:07.812464) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.818 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.819 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.819 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingDropPollster object at 0x7f163c67d880>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.820 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.820 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.820 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e060>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.820 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.821 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.821 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.drop (2026-01-05T21:49:07.820829) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.822 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.drop in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.822 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingPacketsPollster object at 0x7f163c67f860>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.822 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.823 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.823 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f890>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.823 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.823 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets volume: 28 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.824 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.825 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.RootSizePollster object at 0x7f163c67c0b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.825 14 INFO ceilometer.polling.manager [-] Polling pollster disk.root.size in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.826 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.826 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c0e0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.827 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.root.size heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.827 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.root.size in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets (2026-01-05T21:49:07.823492) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.828 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesPollster object at 0x7f163c67fad0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.828 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.root.size (2026-01-05T21:49:07.827023) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.828 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.828 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.829 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f8f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.829 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.829 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes volume: 3390 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.830 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes (2026-01-05T21:49:07.829483) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.830 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.831 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesDeltaPollster object at 0x7f163c67f950>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.831 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.831 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.831 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67f980>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.831 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.832 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.832 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.bytes.delta in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingBytesRatePollster object at 0x7f163c67f9e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.833 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.outgoing.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.833 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingErrorsPollster object at 0x7f163c67fa70>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.bytes.delta (2026-01-05T21:49:07.831810) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2c0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.834 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.835 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.error in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.836 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.OutgoingErrorsPollster object at 0x7f163c67e2a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.836 14 INFO ceilometer.polling.manager [-] Polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.836 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.error (2026-01-05T21:49:07.834700) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.836 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.836 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e2d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.837 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.outgoing.packets.error heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.837 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.outgoing.packets.error volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.838 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.outgoing.packets.error in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.838 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.outgoing.packets.error (2026-01-05T21:49:07.837092) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.838 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceAllocationPollster object at 0x7f163f5249b0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.839 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.839 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.839 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67c2f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.839 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.allocation heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.839 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.allocation (2026-01-05T21:49:07.839555) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.862 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 30089216 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.862 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.allocation volume: 512000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.863 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.allocation in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.863 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesDeltaPollster object at 0x7f163c67dd90>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.864 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.864 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.864 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163e5b82f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.864 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes.delta heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.865 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes.delta volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.866 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes.delta in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.866 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesRatePollster object at 0x7f163c67ddf0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.867 14 DEBUG ceilometer.polling.manager [-] Skip pollster network.incoming.bytes.rate, no new resources found this cycle _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:321
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.867 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes.delta (2026-01-05T21:49:07.864869) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.867 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingDropPollster object at 0x7f163c67c260>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.867 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.868 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.868 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dbb0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.868 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.packets.drop heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.869 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.packets.drop volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.869 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.packets.drop in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.870 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.packets.drop (2026-01-05T21:49:07.868660) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.870 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.MemoryUsagePollster object at 0x7f163c67dd30>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.870 14 INFO ceilometer.polling.manager [-] Polling pollster memory.usage in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.870 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.871 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67dd60>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.871 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: memory.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.872 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for memory.usage (2026-01-05T21:49:07.871611) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.894 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/memory.usage volume: 42.60546875 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.894 14 INFO ceilometer.polling.manager [-] Finished polling pollster memory.usage in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.894 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.PowerStatePollster object at 0x7f163c67e540>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 14 INFO ceilometer.polling.manager [-] Polling pollster power.state in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67e570>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: power.state heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/power.state volume: 1 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.895 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for power.state (2026-01-05T21:49:07.895447) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 INFO ceilometer.polling.manager [-] Finished polling pollster power.state in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceCapacityPollster object at 0x7f163c67cb60>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d580>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.896 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.capacity heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.897 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 1073741824 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.897 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.capacity (2026-01-05T21:49:07.896836) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.897 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.capacity volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.897 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.capacity in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.898 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadBytesPollster object at 0x7f163c67d550>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.898 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.898 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.898 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d5b0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.898 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.899 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.bytes (2026-01-05T21:49:07.898744) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.940 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 31029760 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.940 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.bytes volume: 299326 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.941 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.941 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.net.IncomingBytesPollster object at 0x7f163d0f6270>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.942 14 INFO ceilometer.polling.manager [-] Polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.942 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.942 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67ddc0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.943 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for network.incoming.bytes (2026-01-05T21:49:07.942730) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.942 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: network.incoming.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.943 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/network.incoming.bytes volume: 4311 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.944 14 INFO ceilometer.polling.manager [-] Finished polling pollster network.incoming.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.944 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskReadLatencyPollster object at 0x7f163c67d5e0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.945 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.945 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.945 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d610>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.946 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.latency (2026-01-05T21:49:07.945851) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.945 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.946 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 519177861 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.946 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.latency volume: 51692234 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.947 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.latency in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.948 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceReadRequestsPollster object at 0x7f163c67d640>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.948 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.948 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.948 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d670>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.949 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.read.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.949 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 1138 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.949 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.read.requests (2026-01-05T21:49:07.948934) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.950 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.read.requests volume: 120 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.950 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.read.requests in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.950 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDevicePhysicalPollster object at 0x7f163c67d6a0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.951 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.usage in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.951 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.951 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d6d0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.951 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.usage heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.951 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 29949952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.952 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.usage (2026-01-05T21:49:07.951656) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.952 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.usage volume: 509952 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.953 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.usage in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.953 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteBytesPollster object at 0x7f163c67d700>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.953 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.953 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.954 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d730>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.954 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.bytes heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.954 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 73068544 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.954 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.bytes volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.955 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.bytes (2026-01-05T21:49:07.954163) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.955 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.bytes in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.956 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.instance_stats.CPUPollster object at 0x7f163c67d910>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.956 14 INFO ceilometer.polling.manager [-] Polling pollster cpu in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.956 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163d133770>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.956 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163d133770>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.957 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: cpu heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.957 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/cpu volume: 47680000000 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.957 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for cpu (2026-01-05T21:49:07.957132) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.958 14 INFO ceilometer.polling.manager [-] Finished polling pollster cpu in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.958 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceDiskWriteLatencyPollster object at 0x7f163c67d760>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.958 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.958 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.959 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d790>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.959 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.latency heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.959 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 13557622904 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.959 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.latency (2026-01-05T21:49:07.959291) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.960 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.latency volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.960 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.latency in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.961 14 DEBUG ceilometer.polling.manager [-] Executing discovery process for pollsters [<ceilometer.compute.pollsters.disk.PerDeviceWriteRequestsPollster object at 0x7f163c67d7c0>] and discovery method [local_instances] via process [<bound method AgentManager.discover of <ceilometer.polling.manager.AgentManager object at 0x7f163cff3e90>>]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:294
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.961 14 INFO ceilometer.polling.manager [-] Polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.961 14 DEBUG ceilometer.polling.manager [-] Checking if we need coordination for pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] with coordination group name [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:333
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.961 14 DEBUG ceilometer.polling.manager [-] The pollster [<stevedore.extension.Extension object at 0x7f163c67d7f0>] is not configured in a source for polling that requires coordination. The current hashrings are the following [None]. _internal_pollster_run /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:355
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.962 14 DEBUG ceilometer.polling.manager [-] Polster heartbeat update: disk.device.write.requests heartbeat /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:636
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.962 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 328 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.962 12 DEBUG ceilometer.polling.manager [-] Updated heartbeat for disk.device.write.requests (2026-01-05T21:49:07.961950) _update_status /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:502
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.962 14 DEBUG ceilometer.compute.pollsters [-] 62f57876-af2d-4771-bffd-c87b7755cc5c/disk.device.write.requests volume: 0 _stats_to_sample /usr/lib/python3.12/site-packages/ceilometer/compute/pollsters/__init__.py:108
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.963 14 INFO ceilometer.polling.manager [-] Finished polling pollster disk.device.write.requests in the context of pollsters
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.963 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.ephemeral.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.root.size]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.outgoing.packets.error]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.allocation]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.delta]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes.rate]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.packets.drop]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [memory.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.964 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [power.state]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.capacity]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [network.incoming.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.read.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.usage]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.bytes]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [cpu]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.latency]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:07 compute-0 ceilometer_agent_compute[195874]: 2026-01-05 21:49:07.965 14 DEBUG ceilometer.polling.manager [-] Finished processing pollster [disk.device.write.requests]. execute_polling_task_processing /usr/lib/python3.12/site-packages/ceilometer/polling/manager.py:272
Jan 05 21:49:08 compute-0 nova_compute[186018]: 2026-01-05 21:49:08.473 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:08 compute-0 podman[259195]: 2026-01-05 21:49:08.78842716 +0000 UTC m=+0.122717697 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, container_name=ceilometer_agent_compute, org.label-schema.build-date=20251224, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, managed_by=edpm_ansible, tcib_managed=true, maintainer=OpenStack Kubernetes Operator team, org.label-schema.license=GPLv2, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, config_id=ceilometer_agent_compute, io.buildah.version=1.41.4, org.label-schema.schema-version=1.0)
Jan 05 21:49:11 compute-0 nova_compute[186018]: 2026-01-05 21:49:11.747 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:13 compute-0 nova_compute[186018]: 2026-01-05 21:49:13.475 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:16 compute-0 nova_compute[186018]: 2026-01-05 21:49:16.751 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:17 compute-0 podman[259217]: 2026-01-05 21:49:17.76962837 +0000 UTC m=+0.112960953 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, build-date=2025-08-20T13:12:41, vcs-type=git, architecture=x86_64, com.redhat.component=ubi9-minimal-container, version=9.6, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., name=ubi9-minimal, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': 
['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., release=1755695350, url=https://catalog.redhat.com/en/search?searchType=containers, managed_by=edpm_ansible, container_name=openstack_network_exporter, io.openshift.tags=minimal rhel9, vendor=Red Hat, Inc., io.openshift.expose-services=)
Jan 05 21:49:17 compute-0 podman[259216]: 2026-01-05 21:49:17.799127272 +0000 UTC m=+0.146366799 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, config_id=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, tcib_managed=true, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.vendor=CentOS, container_name=ovn_controller, 
io.buildah.version=1.41.3, managed_by=edpm_ansible)
Jan 05 21:49:18 compute-0 nova_compute[186018]: 2026-01-05 21:49:18.477 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:21 compute-0 nova_compute[186018]: 2026-01-05 21:49:21.755 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:22 compute-0 podman[259262]: 2026-01-05 21:49:22.753070185 +0000 UTC m=+0.107950616 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, tcib_managed=true, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.license=GPLv2, org.label-schema.vendor=CentOS, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', '/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', 
'/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, org.label-schema.name=CentOS Stream 9 Base Image, io.buildah.version=1.41.3, org.label-schema.schema-version=1.0, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb)
Jan 05 21:49:22 compute-0 podman[259263]: 2026-01-05 21:49:22.754376744 +0000 UTC m=+0.095825002 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter)
Jan 05 21:49:23 compute-0 nova_compute[186018]: 2026-01-05 21:49:23.480 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:26 compute-0 nova_compute[186018]: 2026-01-05 21:49:26.758 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:28 compute-0 nova_compute[186018]: 2026-01-05 21:49:28.482 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:29 compute-0 podman[259305]: 2026-01-05 21:49:29.734254044 +0000 UTC m=+0.092924457 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_id=node_exporter, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']})
Jan 05 21:49:29 compute-0 podman[202426]: time="2026-01-05T21:49:29Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:49:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:49:29 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:49:29 compute-0 podman[202426]: @ - - [05/Jan/2026:21:49:29 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 05 21:49:31 compute-0 openstack_network_exporter[205720]: ERROR   21:49:31 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:49:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:49:31 compute-0 openstack_network_exporter[205720]: ERROR   21:49:31 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:49:31 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:49:31 compute-0 nova_compute[186018]: 2026-01-05 21:49:31.761 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:33 compute-0 nova_compute[186018]: 2026-01-05 21:49:33.485 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:36 compute-0 podman[259327]: 2026-01-05 21:49:36.764941588 +0000 UTC m=+0.111268853 container health_status cb246dada680dd23c92505c1959718bbc68e3182dea5cee76503f9c8a435237f (image=quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified, name=ceilometer_agent_ipmi, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.vendor=CentOS, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.build-date=20251202, org.label-schema.schema-version=1.0, container_name=ceilometer_agent_ipmi, org.label-schema.name=CentOS Stream 9 Base Image, tcib_managed=true, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e-7348ce2afddc5761f77e9511231e479ec0a77902488e71ba3ef9ae006688402e'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi', 'test': '/openstack/healthcheck ipmi'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ceilometer-ipmi:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry-power-monitoring:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_ipmi.json:/var/lib/kolla/config_files/config.json:z', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', 
'/var/lib/openstack/cacerts/telemetry-power-monitoring/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry-power-monitoring/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry-power-monitoring/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_ipmi:/openstack:ro,z']}, config_id=ceilometer_agent_ipmi, org.label-schema.license=GPLv2)
Jan 05 21:49:36 compute-0 nova_compute[186018]: 2026-01-05 21:49:36.764 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:36 compute-0 podman[259326]: 2026-01-05 21:49:36.790978179 +0000 UTC m=+0.130587008 container health_status ca49757bcd2d8d66bc9405358c10c962760c86d677efe831002631896f30b928 (image=quay.io/sustainable_computing_io/kepler:release-0.7.12, name=kepler, health_status=healthy, health_failing_streak=0, health_log=, version=9.4, name=ubi9, url=https://access.redhat.com/containers/#/registry.access.redhat.com/ubi9/images/9.4-1214.1726694543, config_data={'command': '-v=2', 'environment': {'ENABLE_GPU': 'true', 'ENABLE_PROCESS_METRICS': 'true', 'EXPOSE_CONTAINER_METRICS': 'true', 'EXPOSE_ESTIMATED_IDLE_POWER_METRICS': 'false', 'EXPOSE_VM_METRICS': 'true', 'LIBVIRT_METADATA_URI': 'http://openstack.org/xmlns/libvirt/nova/1.1'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/kepler', 'test': '/openstack/healthcheck kepler'}, 'image': 'quay.io/sustainable_computing_io/kepler:release-0.7.12', 'net': 'host', 'ports': ['8888:8888'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/lib/modules:/lib/modules:ro', '/run/libvirt:/run/libvirt:shared,ro', '/sys:/sys', '/proc:/proc', '/var/lib/openstack/healthchecks/kepler:/openstack:ro,z']}, io.openshift.expose-services=, maintainer=Red Hat, Inc., distribution-scope=public, io.openshift.tags=base rhel9, release=1214.1726694543, build-date=2024-09-18T21:23:30, summary=Provides the latest release of Red Hat Universal Base Image 9., description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. 
This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, release-0.7.12=, com.redhat.component=ubi9-container, container_name=kepler, io.k8s.display-name=Red Hat Universal Base Image 9, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, vcs-type=git, architecture=x86_64, vcs-ref=e309397d02fc53f7fa99db1371b8700eb49f268f, config_id=kepler, vendor=Red Hat, Inc., io.buildah.version=1.29.0, io.k8s.description=The Universal Base Image is designed and engineered to be the base layer for all of your containerized applications, middleware and utilities. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly.)
Jan 05 21:49:38 compute-0 nova_compute[186018]: 2026-01-05 21:49:38.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._reclaim_queued_deletes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:38 compute-0 nova_compute[186018]: 2026-01-05 21:49:38.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CONF.reclaim_instance_interval <= 0, skipping... _reclaim_queued_deletes /usr/lib/python3.9/site-packages/nova/compute/manager.py:10477
Jan 05 21:49:38 compute-0 nova_compute[186018]: 2026-01-05 21:49:38.487 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:39 compute-0 nova_compute[186018]: 2026-01-05 21:49:39.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager.update_available_resource run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:39 compute-0 podman[259365]: 2026-01-05 21:49:39.772848242 +0000 UTC m=+0.117314500 container health_status dc5f30b3cefdb2e580a11421409834655208a44d8c6faf7746fcbee49cccf8b2 (image=quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested, name=ceilometer_agent_compute, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': 'kolla_start', 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ceilometer_agent_compute', 'test': '/openstack/healthcheck compute'}, 'image': 'quay.rdoproject.org/podified-master-centos10/openstack-ceilometer-compute:current-tested', 'net': 'host', 'restart': 'always', 'security_opt': 'label:type:ceilometer_polling_t', 'user': 'ceilometer', 'volumes': ['/var/lib/openstack/telemetry:/var/lib/kolla/config_files/src:z', '/var/lib/kolla/config_files/ceilometer_agent_compute.json:/var/lib/kolla/config_files/config.json:z', '/run/libvirt:/run/libvirt:shared,ro', '/etc/hosts:/etc/hosts:ro', '/etc/pki/tls/certs/ca-bundle.trust.crt:/etc/pki/tls/certs/ca-bundle.trust.crt:ro', '/etc/localtime:/etc/localtime:ro', '/etc/pki/ca-trust/source/anchors:/etc/pki/ca-trust/source/anchors:ro', '/var/lib/openstack/cacerts/telemetry/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/telemetry/ceilometer_prom_exporter.yaml:/etc/ceilometer/ceilometer_prom_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/ceilometer/tls:z', '/dev/log:/dev/log', '/var/lib/openstack/healthchecks/ceilometer_agent_compute:/openstack:ro,z']}, container_name=ceilometer_agent_compute, 
maintainer=OpenStack Kubernetes Operator team, managed_by=edpm_ansible, org.label-schema.schema-version=1.0, config_id=ceilometer_agent_compute, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 10 Base Image, tcib_build_tag=9d61202dec2d131dec612b9e8291355e, tcib_managed=true, io.buildah.version=1.41.4, org.label-schema.build-date=20251224, org.label-schema.vendor=CentOS)
Jan 05 21:49:39 compute-0 nova_compute[186018]: 2026-01-05 21:49:39.820 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:49:39 compute-0 nova_compute[186018]: 2026-01-05 21:49:39.821 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:49:39 compute-0 nova_compute[186018]: 2026-01-05 21:49:39.822 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker.clean_compute_node_cache" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:49:39 compute-0 nova_compute[186018]: 2026-01-05 21:49:39.822 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Auditing locally available compute resources for compute-0.ctlplane.example.com (node: compute-0.ctlplane.example.com) update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:861
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.000 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.100 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.101 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running cmd (subprocess): /usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:384
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.201 186022 DEBUG oslo_concurrency.processutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] CMD "/usr/bin/python3 -m oslo_concurrency.prlimit --as=1073741824 --cpu=30 -- env LC_ALL=C LANG=C qemu-img info /var/lib/nova/instances/62f57876-af2d-4771-bffd-c87b7755cc5c/disk --force-share --output=json" returned: 0 in 0.100s execute /usr/lib/python3.9/site-packages/oslo_concurrency/processutils.py:422
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.554 186022 WARNING nova.virt.libvirt.driver [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] This host appears to have multiple sockets per NUMA node. The `socket` PCI NUMA affinity will not be supported.
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.556 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Hypervisor/Node resource view: name=compute-0.ctlplane.example.com free_ram=5111MB free_disk=72.31593322753906GB free_vcpus=7 pci_devices=[{"dev_id": "pci_0000_00_01_3", "address": "0000:00:01.3", "product_id": "7113", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7113", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_07_0", "address": "0000:00:07.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_2", "address": "0000:00:01.2", "product_id": "7020", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7020", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_02_0", "address": "0000:00:02.0", "product_id": "1050", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1050", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_03_0", "address": "0000:00:03.0", "product_id": "1000", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_0", "address": "0000:00:01.0", "product_id": "7000", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7000", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_04_0", "address": "0000:00:04.0", "product_id": "1001", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1001", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_01_1", "address": "0000:00:01.1", "product_id": "7010", "vendor_id": "8086", "numa_node": null, "label": "label_8086_7010", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_00_0", "address": "0000:00:00.0", "product_id": "1237", "vendor_id": "8086", "numa_node": null, "label": "label_8086_1237", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_06_0", "address": "0000:00:06.0", "product_id": "1005", "vendor_id": "1af4", "numa_node": null, "label": 
"label_1af4_1005", "dev_type": "type-PCI"}, {"dev_id": "pci_0000_00_05_0", "address": "0000:00:05.0", "product_id": "1002", "vendor_id": "1af4", "numa_node": null, "label": "label_1af4_1002", "dev_type": "type-PCI"}] _report_hypervisor_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1034
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.556 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "compute_resources" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.557 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" acquired by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.633 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Instance 62f57876-af2d-4771-bffd-c87b7755cc5c actively managed on this compute host and has allocations in placement: {'resources': {'DISK_GB': 1, 'MEMORY_MB': 128, 'VCPU': 1}}. _remove_deleted_instances_allocations /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1635
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.633 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Total usable vcpus: 8, total allocated vcpus: 1 _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1057
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.634 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Final resource view: name=compute-0.ctlplane.example.com phys_ram=7679MB used_ram=640MB phys_disk=79GB used_disk=1GB total_vcpus=8 used_vcpus=1 pci_stats=[] _report_final_resource_view /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:1066
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.674 186022 DEBUG nova.compute.provider_tree [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed in ProviderTree for provider: 98d67ab0-e613-4c26-9eaa-22cf91b060a7 update_inventory /usr/lib/python3.9/site-packages/nova/compute/provider_tree.py:180
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.691 186022 DEBUG nova.scheduler.client.report [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Inventory has not changed for provider 98d67ab0-e613-4c26-9eaa-22cf91b060a7 based on inventory data: {'VCPU': {'total': 8, 'reserved': 0, 'min_unit': 1, 'max_unit': 8, 'step_size': 1, 'allocation_ratio': 4.0}, 'MEMORY_MB': {'total': 7679, 'reserved': 512, 'min_unit': 1, 'max_unit': 7679, 'step_size': 1, 'allocation_ratio': 1.0}, 'DISK_GB': {'total': 79, 'reserved': 1, 'min_unit': 1, 'max_unit': 79, 'step_size': 1, 'allocation_ratio': 0.9}} set_inventory_for_provider /usr/lib/python3.9/site-packages/nova/scheduler/client/report.py:940
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.694 186022 DEBUG nova.compute.resource_tracker [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Compute_service record updated for compute-0.ctlplane.example.com:compute-0.ctlplane.example.com _update_available_resource /usr/lib/python3.9/site-packages/nova/compute/resource_tracker.py:995
Jan 05 21:49:40 compute-0 nova_compute[186018]: 2026-01-05 21:49:40.694 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lock "compute_resources" "released" by "nova.compute.resource_tracker.ResourceTracker._update_available_resource" :: held 0.137s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:49:41 compute-0 nova_compute[186018]: 2026-01-05 21:49:41.689 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._check_instance_build_time run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:41 compute-0 nova_compute[186018]: 2026-01-05 21:49:41.768 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:42 compute-0 nova_compute[186018]: 2026-01-05 21:49:42.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._heal_instance_info_cache run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:42 compute-0 nova_compute[186018]: 2026-01-05 21:49:42.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Starting heal instance info cache _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9858
Jan 05 21:49:42 compute-0 nova_compute[186018]: 2026-01-05 21:49:42.461 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Rebuilding the list of instances to heal _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9862
Jan 05 21:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:49:42.891 107689 DEBUG oslo_concurrency.lockutils [-] Acquiring lock "_check_child_processes" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:404
Jan 05 21:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:49:42.892 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" acquired by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: waited 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:409
Jan 05 21:49:42 compute-0 ovn_metadata_agent[107684]: 2026-01-05 21:49:42.892 107689 DEBUG oslo_concurrency.lockutils [-] Lock "_check_child_processes" "released" by "neutron.agent.linux.external_process.ProcessMonitor._check_child_processes" :: held 0.001s inner /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:423
Jan 05 21:49:43 compute-0 nova_compute[186018]: 2026-01-05 21:49:43.112 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquiring lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:312
Jan 05 21:49:43 compute-0 nova_compute[186018]: 2026-01-05 21:49:43.112 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Acquired lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:315
Jan 05 21:49:43 compute-0 nova_compute[186018]: 2026-01-05 21:49:43.112 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Forcefully refreshing network info cache for instance _get_instance_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:2004
Jan 05 21:49:43 compute-0 nova_compute[186018]: 2026-01-05 21:49:43.113 186022 DEBUG nova.objects.instance [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Lazy-loading 'info_cache' on Instance uuid 62f57876-af2d-4771-bffd-c87b7755cc5c obj_load_attr /usr/lib/python3.9/site-packages/nova/objects/instance.py:1105
Jan 05 21:49:43 compute-0 nova_compute[186018]: 2026-01-05 21:49:43.494 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:45 compute-0 sshd-session[259389]: Accepted publickey for zuul from 192.168.122.10 port 37358 ssh2: ECDSA SHA256:IlVWKy/HlVJ6unwGDFRcOMnibLrbU+s1GE3mebSCROE
Jan 05 21:49:45 compute-0 systemd-logind[788]: New session 31 of user zuul.
Jan 05 21:49:45 compute-0 systemd[1]: Started Session 31 of User zuul.
Jan 05 21:49:45 compute-0 sshd-session[259389]: pam_unix(sshd:session): session opened for user zuul(uid=1000) by zuul(uid=0)
Jan 05 21:49:45 compute-0 sudo[259393]:     zuul : PWD=/home/zuul ; USER=root ; COMMAND=/bin/bash -c 'rm -rf /var/tmp/sos-osp && mkdir /var/tmp/sos-osp && sos report --batch --all-logs --tmp-dir=/var/tmp/sos-osp  -p container,openstack_edpm,system,storage,virt'
Jan 05 21:49:45 compute-0 sudo[259393]: pam_unix(sudo:session): session opened for user root(uid=0) by zuul(uid=1000)
Jan 05 21:49:45 compute-0 nova_compute[186018]: 2026-01-05 21:49:45.380 186022 DEBUG nova.network.neutron [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updating instance_info_cache with network_info: [{"id": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "address": "fa:16:3e:d3:0d:bf", "network": {"id": "33bcb7a6-33e4-40b9-bab8-4665cf65dcc5", "bridge": "br-int", "label": "tempest-AttachInterfacesUnderV243Test-1372767109-network", "subnets": [{"cidr": "10.100.0.0/28", "dns": [], "gateway": {"address": "10.100.0.1", "type": "gateway", "version": 4, "meta": {}}, "ips": [{"address": "10.100.0.10", "type": "fixed", "version": 4, "meta": {}, "floating_ips": []}, {"address": "10.100.0.6", "type": "fixed", "version": 4, "meta": {}, "floating_ips": [{"address": "192.168.122.236", "type": "floating", "version": 4, "meta": {}}]}], "routes": [], "version": 4, "meta": {"enable_dhcp": true}}], "meta": {"injected": false, "tenant_id": "e0899289c7dd4631b4fa69150a914123", "mtu": 1442, "physical_network": null, "tunneled": true}}, "type": "ovs", "details": {"port_filter": true, "connectivity": "l2", "bridge_name": "br-int", "datapath_type": "system", "bound_drivers": {"0": "ovn"}}, "devname": "tapa6acaedc-5f", "ovs_interfaceid": "a6acaedc-5f9d-4aca-9e6b-c69623601aca", "qbh_params": null, "qbg_params": null, "active": true, "vnic_type": "normal", "profile": {}, "preserve_on_delete": false, "delegate_create": true, "meta": {}}] update_instance_cache_with_nw_info /usr/lib/python3.9/site-packages/nova/network/neutron.py:116
Jan 05 21:49:45 compute-0 nova_compute[186018]: 2026-01-05 21:49:45.456 186022 DEBUG oslo_concurrency.lockutils [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Releasing lock "refresh_cache-62f57876-af2d-4771-bffd-c87b7755cc5c" lock /usr/lib/python3.9/site-packages/oslo_concurrency/lockutils.py:333
Jan 05 21:49:45 compute-0 nova_compute[186018]: 2026-01-05 21:49:45.456 186022 DEBUG nova.compute.manager [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] [instance: 62f57876-af2d-4771-bffd-c87b7755cc5c] Updated the network info_cache for instance _heal_instance_info_cache /usr/lib/python3.9/site-packages/nova/compute/manager.py:9929
Jan 05 21:49:45 compute-0 nova_compute[186018]: 2026-01-05 21:49:45.457 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rebooting_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:46 compute-0 nova_compute[186018]: 2026-01-05 21:49:46.771 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:47 compute-0 nova_compute[186018]: 2026-01-05 21:49:47.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_rescued_instances run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:47 compute-0 nova_compute[186018]: 2026-01-05 21:49:47.461 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_volume_usage run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:48 compute-0 podman[259530]: 2026-01-05 21:49:48.273076352 +0000 UTC m=+0.110534352 container health_status aeacaacb47d418798b384cd139a18d0e247e77a393918f9dd9403a659fcf75cb (image=quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified, name=openstack_network_exporter, health_status=healthy, health_failing_streak=0, health_log=, io.buildah.version=1.33.7, vendor=Red Hat, Inc., io.k8s.display-name=Red Hat Universal Base Image 9 Minimal, maintainer=Red Hat, Inc., io.openshift.tags=minimal rhel9, config_id=openstack_network_exporter, io.k8s.description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., config_data={'command': [], 'environment': {'OPENSTACK_NETWORK_EXPORTER_YAML': '/etc/openstack_network_exporter/openstack_network_exporter.yaml', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/openstack_network_exporter', 'test': '/openstack/healthcheck openstack-netwo'}, 'image': 'quay.io/openstack-k8s-operators/openstack-network-exporter:current-podified', 'net': 'host', 'ports': ['9105:9105'], 'privileged': True, 'recreate': True, 'restart': 'always', 'volumes': ['/var/lib/openstack/telemetry/openstack_network_exporter.yaml:/etc/openstack_network_exporter/openstack_network_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/openstack_network_exporter/tls:z', '/var/run/openvswitch:/run/openvswitch:rw,z', '/var/lib/openvswitch/ovn:/run/ovn:rw,z', '/proc:/host/proc:ro', '/var/lib/openstack/healthchecks/openstack_network_exporter:/openstack:ro,z']}, 
url=https://catalog.redhat.com/en/search?searchType=containers, release=1755695350, vcs-ref=f4b088292653bbf5ca8188a5e59ffd06a8671d4b, description=The Universal Base Image Minimal is a stripped down image that uses microdnf as a package manager. This base image is freely redistributable, but Red Hat only supports Red Hat technologies through subscriptions for Red Hat products. This image is maintained by Red Hat and updated regularly., managed_by=edpm_ansible, build-date=2025-08-20T13:12:41, com.redhat.license_terms=https://www.redhat.com/en/about/red-hat-end-user-license-agreements#UBI, version=9.6, summary=Provides the latest release of the minimal Red Hat Universal Base Image 9., distribution-scope=public, com.redhat.component=ubi9-minimal-container, io.openshift.expose-services=, vcs-type=git, architecture=x86_64, container_name=openstack_network_exporter, name=ubi9-minimal)
Jan 05 21:49:48 compute-0 podman[259529]: 2026-01-05 21:49:48.30791718 +0000 UTC m=+0.147843302 container health_status 8bfd29ed6f84fa996b5741bebeab604c5f58862ff12c00a3191f26f7626dedd4 (image=quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified, name=ovn_controller, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, org.label-schema.name=CentOS Stream 9 Base Image, org.label-schema.schema-version=1.0, tcib_managed=true, config_data={'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_controller', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-ovn-controller:current-podified', 'net': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/lib/modules:/lib/modules:ro', '/run:/run', '/var/lib/openvswitch/ovn:/run/ovn:shared,z', '/var/lib/kolla/config_files/ovn_controller.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/openstack/cacerts/ovn/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/ovn/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/ovn/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_controller:/openstack:ro,z']}, container_name=ovn_controller, maintainer=OpenStack Kubernetes Operator team, org.label-schema.build-date=20251202, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, io.buildah.version=1.41.3, managed_by=edpm_ansible, 
org.label-schema.vendor=CentOS, config_id=ovn_controller)
Jan 05 21:49:48 compute-0 nova_compute[186018]: 2026-01-05 21:49:48.492 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:51 compute-0 nova_compute[186018]: 2026-01-05 21:49:51.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._poll_unconfirmed_resizes run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:51 compute-0 nova_compute[186018]: 2026-01-05 21:49:51.773 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:52 compute-0 nova_compute[186018]: 2026-01-05 21:49:52.460 186022 DEBUG oslo_service.periodic_task [None req-7366769c-1e7d-4fc3-8292-76a793744633 - - - - - -] Running periodic task ComputeManager._instance_usage_audit run_periodic_tasks /usr/lib/python3.9/site-packages/oslo_service/periodic_task.py:210
Jan 05 21:49:52 compute-0 ovs-vsctl[259654]: ovs|00001|db_ctl_base|ERR|no key "dpdk-init" in Open_vSwitch record "." column other_config
Jan 05 21:49:53 compute-0 nova_compute[186018]: 2026-01-05 21:49:53.495 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:53 compute-0 systemd[1]: proc-sys-fs-binfmt_misc.automount: Got automount request for /proc/sys/fs/binfmt_misc, triggered by 259417 (sos)
Jan 05 21:49:53 compute-0 systemd[1]: Mounting Arbitrary Executable File Formats File System...
Jan 05 21:49:53 compute-0 systemd[1]: Mounted Arbitrary Executable File Formats File System.
Jan 05 21:49:53 compute-0 podman[259699]: 2026-01-05 21:49:53.732212442 +0000 UTC m=+0.080592947 container health_status 8d8acaf3747f585de5cdbed0a7268182e39e394915965a523c6987a53bfc3094 (image=quay.io/navidys/prometheus-podman-exporter:v1.10.1, name=podman_exporter, health_status=healthy, health_failing_streak=0, health_log=, config_data={'command': ['--web.config.file=/etc/podman_exporter/podman_exporter.yaml'], 'environment': {'CONTAINER_HOST': 'unix:///run/podman/podman.sock', 'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/podman_exporter', 'test': '/openstack/healthcheck podman_exporter'}, 'image': 'quay.io/navidys/prometheus-podman-exporter:v1.10.1', 'net': 'host', 'ports': ['9882:9882'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/podman_exporter.yaml:/etc/podman_exporter/podman_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/podman_exporter/tls:z', '/run/podman/podman.sock:/run/podman/podman.sock:rw,z', '/var/lib/openstack/healthchecks/podman_exporter:/openstack:ro,z']}, config_id=podman_exporter, container_name=podman_exporter, maintainer=Navid Yaghoobi <navidys@fedoraproject.org>, managed_by=edpm_ansible)
Jan 05 21:49:53 compute-0 podman[259697]: 2026-01-05 21:49:53.74755236 +0000 UTC m=+0.089831936 container health_status 490be1719b1ca23d97faebda7b0ba0b90360243ead53e6bbad3c2740018cfb39 (image=quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified, name=ovn_metadata_agent, health_status=healthy, health_failing_streak=0, health_log=, org.label-schema.license=GPLv2, config_id=ovn_metadata_agent, container_name=ovn_metadata_agent, io.buildah.version=1.41.3, org.label-schema.build-date=20251202, org.label-schema.name=CentOS Stream 9 Base Image, maintainer=OpenStack Kubernetes Operator team, org.label-schema.schema-version=1.0, org.label-schema.vendor=CentOS, tcib_managed=true, tcib_build_tag=c3923531bcda0b0811b2d5053f189beb, config_data={'cgroupns': 'host', 'depends_on': ['openvswitch.service'], 'environment': {'KOLLA_CONFIG_STRATEGY': 'COPY_ALWAYS', 'EDPM_CONFIG_HASH': 'ee77216cd420c0fdd767bcb5cbc85ebc4cae68a9e9de01f2444ba7085ce60b8e-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-0823bd3e096c75f72e4a95820d41b0d4b6a1172bd2892ddb9f29b788a11bc87d'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/ovn_metadata_agent', 'test': '/openstack/healthcheck'}, 'image': 'quay.io/podified-antelope-centos9/openstack-neutron-metadata-agent-ovn:current-podified', 'net': 'host', 'pid': 'host', 'privileged': True, 'restart': 'always', 'user': 'root', 'volumes': ['/run/openvswitch:/run/openvswitch:z', '/var/lib/openstack/neutron-ovn-metadata-agent:/etc/neutron.conf.d:z', '/run/netns:/run/netns:shared', '/var/lib/kolla/config_files/ovn_metadata_agent.json:/var/lib/kolla/config_files/config.json:ro', '/var/lib/neutron:/var/lib/neutron:shared,z', '/var/lib/neutron/ovn_metadata_haproxy_wrapper:/usr/local/bin/haproxy:ro', '/var/lib/neutron/kill_scripts:/etc/neutron/kill_scripts:ro', 
'/var/lib/openstack/cacerts/neutron-metadata/tls-ca-bundle.pem:/etc/pki/ca-trust/extracted/pem/tls-ca-bundle.pem:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/ca.crt:/etc/pki/tls/certs/ovndbca.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.crt:/etc/pki/tls/certs/ovndb.crt:ro,z', '/var/lib/openstack/certs/neutron-metadata/default/tls.key:/etc/pki/tls/private/ovndb.key:ro,Z', '/var/lib/openstack/healthchecks/ovn_metadata_agent:/openstack:ro,z']}, managed_by=edpm_ansible)
Jan 05 21:49:54 compute-0 virtqemud[185616]: Failed to connect socket to '/var/run/libvirt/virtnetworkd-sock-ro': No such file or directory
Jan 05 21:49:54 compute-0 virtqemud[185616]: Failed to connect socket to '/var/run/libvirt/virtnwfilterd-sock-ro': No such file or directory
Jan 05 21:49:54 compute-0 virtqemud[185616]: Failed to connect socket to '/var/run/libvirt/virtstoraged-sock-ro': No such file or directory
Jan 05 21:49:55 compute-0 crontab[260123]: (root) LIST (root)
Jan 05 21:49:56 compute-0 nova_compute[186018]: 2026-01-05 21:49:56.776 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:57 compute-0 systemd[1]: Starting Hostname Service...
Jan 05 21:49:57 compute-0 systemd[1]: Started Hostname Service.
Jan 05 21:49:58 compute-0 nova_compute[186018]: 2026-01-05 21:49:58.496 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:49:59 compute-0 podman[202426]: time="2026-01-05T21:49:59Z" level=info msg="List containers: received `last` parameter - overwriting `limit`"
Jan 05 21:49:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:49:59 +0000] "GET /v4.9.3/libpod/containers/json?all=true&external=false&last=0&namespace=false&size=false&sync=false HTTP/1.1" 200 28507 "" "Go-http-client/1.1"
Jan 05 21:49:59 compute-0 podman[202426]: @ - - [05/Jan/2026:21:49:59 +0000] "GET /v4.9.3/libpod/containers/stats?all=false&interval=1&stream=false HTTP/1.1" 200 4374 "" "Go-http-client/1.1"
Jan 05 21:50:00 compute-0 podman[260470]: 2026-01-05 21:50:00.739151188 +0000 UTC m=+0.074704675 container health_status b8e9cde66d3e9b0492915ba6686c4d9aafb9cf66bd9f3ebc2bf8620f0525a3e4 (image=quay.io/prometheus/node-exporter:v1.5.0, name=node_exporter, health_status=healthy, health_failing_streak=0, health_log=, container_name=node_exporter, maintainer=The Prometheus Authors <prometheus-developers@googlegroups.com>, managed_by=edpm_ansible, config_data={'command': ['--web.config.file=/etc/node_exporter/node_exporter.yaml', '--web.disable-exporter-metrics', '--collector.systemd', '--collector.systemd.unit-include=(edpm_.*|ovs.*|openvswitch|virt.*|rsyslog)\\.service', '--no-collector.dmi', '--no-collector.entropy', '--no-collector.thermal_zone', '--no-collector.time', '--no-collector.timex', '--no-collector.uname', '--no-collector.stat', '--no-collector.hwmon', '--no-collector.os', '--no-collector.selinux', '--no-collector.textfile', '--no-collector.powersupplyclass', '--no-collector.pressure', '--no-collector.rapl'], 'environment': {'OS_ENDPOINT_TYPE': 'internal', 'EDPM_CONFIG_HASH': 'ee0ca8ac1cdd832465684dae6db440f27418a13638b6677e978a19a8fc6f7acd-bfe64a9be4ad33b711c387c52062c246b4ab570f953402ac6b4a5261a3dbcbc6'}, 'healthcheck': {'mount': '/var/lib/openstack/healthchecks/node_exporter', 'test': '/openstack/healthcheck node_exporter'}, 'image': 'quay.io/prometheus/node-exporter:v1.5.0', 'net': 'host', 'ports': ['9100:9100'], 'privileged': True, 'recreate': True, 'restart': 'always', 'user': 'root', 'volumes': ['/var/lib/openstack/telemetry/node_exporter.yaml:/etc/node_exporter/node_exporter.yaml:z', '/var/lib/openstack/certs/telemetry/default:/etc/node_exporter/tls:z', '/var/run/dbus/system_bus_socket:/var/run/dbus/system_bus_socket:rw', '/var/lib/openstack/healthchecks/node_exporter:/openstack:ro,z']}, config_id=node_exporter)
Jan 05 21:50:01 compute-0 openstack_network_exporter[205720]: ERROR   21:50:01 appctl.go:174: call(dpif-netdev/pmd-perf-show): please specify an existing datapath
Jan 05 21:50:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:50:01 compute-0 openstack_network_exporter[205720]: ERROR   21:50:01 appctl.go:174: call(dpif-netdev/pmd-rxq-show): please specify an existing datapath
Jan 05 21:50:01 compute-0 openstack_network_exporter[205720]: 
Jan 05 21:50:01 compute-0 nova_compute[186018]: 2026-01-05 21:50:01.780 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
Jan 05 21:50:03 compute-0 nova_compute[186018]: 2026-01-05 21:50:03.498 186022 DEBUG ovsdbapp.backend.ovs_idl.vlog [-] [POLLIN] on fd 26 __log_wakeup /usr/lib64/python3.9/site-packages/ovs/poller.py:263
